CN116913480A - Medical image display method, system, device and storage medium - Google Patents

Medical image display method, system, device and storage medium

Info

Publication number
CN116913480A
Authority
CN
China
Prior art keywords
medical image
image
coordinate system
coordinate position
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311100690.3A
Other languages
Chinese (zh)
Inventor
史庆辉 (Shi Qinghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Healthcare Co Ltd
Original Assignee
Wuhan United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Healthcare Co Ltd filed Critical Wuhan United Imaging Healthcare Co Ltd
Priority to CN202311100690.3A priority Critical patent/CN116913480A/en
Publication of CN116913480A publication Critical patent/CN116913480A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present specification discloses a medical image display method, system, device, and storage medium. The method comprises: acquiring a first primitive of a first medical image and determining a first anchor point that defines the first primitive; acquiring user operation information and an identification point in the first medical image, and determining a second medical image based on the user operation information and a first screen coordinate position of the identification point in a screen coordinate system; and determining a second anchor point corresponding to the second medical image based on a binding relationship, and acquiring a second primitive based on the second anchor point.

Description

Medical image display method, system, device and storage medium
Technical Field
The present specification relates to the field of medical imaging technology, and in particular, to a method, system, apparatus, and storage medium for displaying medical images.
Background
A medical image is an image of internal tissue acquired non-invasively from a target object for medical treatment or medical research. A user may draw on the medical image (e.g., delineate or mark regions) through a display screen. The display screen may include two layers: a drawing area that displays the primitives drawn by the user, and an image area that displays the original medical image. The two layers are superimposed to display the annotated medical image.
However, the same or different users may perform several operations on the annotated medical image, and each operation transforms the positions of the image's pixels in the image coordinate system, yielding the medical image after those operations. Because the drawing area and the image area are independent, the primitives in the drawing area are not transformed synchronously with the medical image in the image area. Transforming the primitives separately for each user operation is complex and time-consuming, and because floating-point rounding is performed repeatedly and independently in the drawing area and the image area over the course of multiple operations, the pixel coordinate positions can accumulate large errors that distort the medical image after the user operations.
Therefore, there is a need for a method in which measurements follow the ultrasound image in real time, improving the display efficiency and accuracy of medical images after multiple user operations.
Disclosure of Invention
One aspect of the present specification provides a medical image display method. The method comprises: acquiring a first primitive of a first medical image and determining a first anchor point that defines the first primitive; acquiring user operation information and an identification point in the first medical image, and determining a second medical image based on the user operation information and a first screen coordinate position of the identification point in a screen coordinate system; and determining a second anchor point corresponding to the second medical image based on a binding relationship, and acquiring a second primitive based on the second anchor point. The binding relationship binds an anchor point to a corresponding first pixel point in the medical image.
One aspect of the present specification provides a medical image display system. The system comprises: a first anchor point determination module configured to acquire a first primitive of a first medical image and determine a first anchor point that defines the first primitive; a second medical image acquisition module configured to acquire user operation information and an identification point in the first medical image and determine a second medical image based on the user operation information and a first screen coordinate position of the identification point in a screen coordinate system; and a second primitive acquisition module configured to determine a second anchor point corresponding to the second medical image based on a binding relationship and acquire a second primitive based on the second anchor point. The binding relationship binds an anchor point to a corresponding first pixel point in the medical image.
One aspect of the present specification provides a medical image display device. The device comprises at least one storage medium storing computer instructions and at least one processor that executes the computer instructions to implement the medical image display method.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions that, when read and executed by a computer, cause the computer to perform the medical image display method.
In some embodiments of the present description, an anchor point of the drawing area is bound to the medical image of the image area, so that the transformed anchor point can be obtained from the transformed medical image by computing the transformation of the anchor point alone; the transformed primitive is then obtained from the transformed anchor point, allowing the drawing area and the image area to be transformed synchronously with this simple calculation.
In some embodiments, because the image coordinate system is a coordinate system measured in pixels, the relative positional relationship in the image coordinate system between the identification point and the other pixel points of the medical image remains unchanged before and after the user operation. The fourth image coordinate positions of the other pixel points of the second medical image can therefore be obtained by mapping from the second image coordinate position of the identification point, ensuring that the acquired second medical image is undistorted and highly accurate.
In some embodiments of the present description, the anchor points and the medical image are bound according to the coincidence relationship between the drawing area and the image area, so that the transformed anchor points can be determined from the transformed medical image via the binding relationship, improving the efficiency of determining the second anchor points.
Some embodiments of the present description dynamically adjust the number of buffers for different user operations, so that a user can conveniently recall a previous medical image, improving the user experience.
Drawings
The present specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals represent like structures, wherein:
FIG. 1 is a schematic illustration of an application scenario of a display system for medical images shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a display system of medical images shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flowchart of a method of displaying medical images according to some embodiments of the present description;
FIG. 4 is an exemplary flow chart of determining a second medical image shown in accordance with some embodiments of the present description;
FIG. 5 is an exemplary diagram of acquiring a second primitive based on a first primitive and user operational information, shown in accordance with some embodiments of the present description;
FIG. 6 is an exemplary schematic diagram of acquiring a second medical image based on a first medical image, according to some embodiments of the present description.
Detailed Description
To illustrate the technical solutions of the embodiments of the present specification more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only some examples or embodiments of the present specification, and those of ordinary skill in the art can apply the present specification to other similar situations based on these drawings without inventive effort. Unless otherwise apparent from the context or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It should be appreciated that the terms "system," "apparatus," "unit," and/or "module" as used in this specification are ways of distinguishing between different components, elements, parts, portions, or assemblies at different levels. However, these terms may be replaced by other expressions that achieve the same purpose.
As used in this specification and the claims, the singular forms "a," "an," and "the" may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Flowcharts are used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that these operations are not necessarily performed exactly in the order shown; the steps may instead be processed in reverse order or simultaneously. Other operations may also be added to or removed from these processes.
FIG. 1 is a schematic diagram of an application scenario of a medical image display system according to some embodiments of the present description. In some embodiments, as shown in FIG. 1, the medical image display system 100 may include an imaging device 110, a processing device 120, a terminal device 130, a network 140, and a storage device 150.
In some embodiments, the components of the medical image display system 100 may be connected in one or more of a variety of ways. By way of example only, as shown in FIG. 1, the imaging device 110 may be connected to the processing device 120 through the network 140. As another example, the imaging device 110 may be directly connected to the processing device 120 (as indicated by the dashed double-headed arrow connecting the imaging device 110 and the processing device 120). As a further example, the storage device 150 may be connected to the processing device 120 directly or through the network 140. As a further example, the terminal device 130 may be connected to the processing device 120 directly (as indicated by the dashed double-headed arrow connecting the terminal device 130 and the processing device 120) and/or via the network 140.
The imaging device 110 may be a device that generates medical images. For example only, the imaging device 110 is an ultrasound device that may transmit ultrasound waves to a target object to acquire corresponding ultrasound data. In some embodiments, the target object may be a human body, an organ, a lesion site, a tumor, or the like, or any combination thereof. For example, the target object may be one or more diseased tissues of the heart.
The processing device 120 may process data and/or information acquired from the imaging device 110, the terminal device 130, and/or the storage device 150. For example, the processing device 120 may acquire medical image data and determine a medical image based on the image data. For another example, the processing device 120 may determine the second medical image based on the user operation information and the first image coordinate position of the identification point in the image coordinate system. In some embodiments, processing device 120 may include a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a system on a chip (SoC), a microcontroller unit (MCU), etc., and/or any combination thereof. In some embodiments, processing device 120 may comprise a computer, a user console, a single server or group of servers, or the like. The server farm may be centralized or distributed. In some embodiments, the processing device 120 may be local or remote. In some embodiments, the processing device 120 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof. In some embodiments, the processing device 120 or a portion of the processing device 120 may be integrated into the imaging device 110.
The terminal device 130 may display the medical image to the user and receive the user operation. In some embodiments, terminal device 130 may include a display device and/or an input device. The display device may display the first medical image and the second medical image, and the input device may receive an operation of the medical image by a user on the screen. In some embodiments, terminal device 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, or the like, or any combination thereof. In some embodiments, the terminal device 130 may be part of the processing device 120.
The network 140 may include any suitable network that facilitates the exchange of information and/or data within the medical image display system 100. In some embodiments, one or more components of the medical image display system 100 (e.g., the imaging device 110, the processing device 120, the storage device 150, the terminal device 130) may exchange information and/or data with one or more other components of the system over the network 140. For example, the processing device 120 may receive medical image data from the imaging device 110 via the network 140. As another example, the processing device 120 may obtain user operations from the terminal device 130 via the network 140. In some embodiments, the network 140 may be and/or include a public network, a private network, a wide area network (WAN), a wired network, a wireless network, a cellular network, a frame relay network, a virtual private network, a satellite network, a telephone network, a router, a hub, a switch, and the like, or any combination thereof.
Storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data acquired from the imaging device 110, the terminal device 130, and/or the processing device 120. In some embodiments, the storage device 150 may be provided with a plurality of buffers for storing buffered medical images corresponding to different user operations. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 120 may execute or use to perform the exemplary methods/systems described in this specification. In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. In some embodiments, the storage device 150 may be implemented on a cloud platform. In some embodiments, the storage device 150 may be part of the processing device 120.
It should be noted that the above description is provided for illustrative purposes only and is not intended to limit the scope of the present description. Many variations and modifications will be apparent to those of ordinary skill in the art, given the benefit of this disclosure. The features, structures, methods, and other features of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments. However, such changes and modifications do not depart from the scope of the present specification.
FIG. 2 is an exemplary block diagram of a medical image display system according to some embodiments of the present description. In some embodiments, the medical image display system 200 may include a first anchor point determination module 210, a second medical image acquisition module 220, a second primitive acquisition module 230, and/or a recording module 240.
The first anchor point determination module 210 may be configured to acquire a first primitive of the first medical image and determine a first anchor point for defining the first primitive. In some embodiments, the first anchor point determination module 210 may acquire medical image data. In some embodiments, the first medical image and the second medical image are determined based on the same medical image data.
The second medical image acquisition module 220 may be configured to acquire user operation information and an identification point in the first medical image, and determine the second medical image based on the user operation information and a first screen coordinate position of the identification point in the screen coordinate system. In some embodiments, the second medical image acquisition module 220 may perform one or more of the following: updating the first screen coordinate position of the identification point in the screen coordinate system based on the user operation information to obtain a second screen coordinate position of the identification point in the screen coordinate system; and determining the second medical image according to the second screen coordinate position of the identification point in the screen coordinate system. In some embodiments, the second medical image acquisition module 220 may determine the first screen coordinate position of the identification point in the screen coordinate system from the first image coordinate position of the identification point in the image coordinate system and the first mapping relationship. In some embodiments, the first mapping relationship may map screen coordinate positions in the screen coordinate system to image coordinate positions in the image coordinate system. In some embodiments, the second medical image acquisition module 220 may perform one or more of the following: acquiring a second image coordinate position of the identification point in the image coordinate system based on the second screen coordinate position of the identification point in the screen coordinate system and the first mapping relationship; and updating the fourth image coordinate positions of other pixel points of the first medical image in the image coordinate system based on the second image coordinate position of the identification point in the image coordinate system and the second mapping relationship, thereby acquiring the second medical image. In some embodiments, the second mapping relationship may map an image coordinate position of the identification point in the image coordinate system to the fourth image coordinate positions of other pixel points of the medical image in the image coordinate system.
The second primitive acquisition module 230 may be configured to determine a second anchor point corresponding to the second medical image based on the binding relationship, and obtain the second primitive based on the second anchor point. In some embodiments, the binding relationship may bind an anchor point to a corresponding first pixel point in the medical image. In some embodiments, the second primitive acquisition module 230 may perform one or more of the following: determining the first pixel point in the second medical image based on a third image coordinate position of the first pixel point in the image coordinate system; and determining a second pixel point corresponding to the first pixel point as a second anchor point based on the binding relationship.
The recording module 240 may be used to record medical images. In some embodiments, the logging module 240 may perform one or more of the following operations. Acquiring preference operation data of a user, and determining the number of buffer areas based on the preference operation data; and when the user rollback operation times meet a preset threshold value, adding a buffer area for recording the medical image corresponding to the last operation.
It should be noted that the above description of the medical image display system 200 and its modules is for convenience of description only and is not intended to limit the present description to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the apparatus, it is possible to combine the individual modules arbitrarily or to construct a sub-apparatus in connection with other modules without departing from such principles. In some embodiments, the modules disclosed in fig. 2 may be different modules in a system, or may be one module to implement the functions of two or more modules described above. For example, each module may share one memory module, or each module may have a respective memory module. Such variations are within the scope of the present description.
FIG. 3 is an exemplary flowchart of a method of displaying medical images according to some embodiments of the present description. In some embodiments, the process 300 may be performed by the medical image display system 100 (e.g., the processing device 120) or the medical image display system 200. For example, the process 300 may be stored in a storage device (e.g., the storage device 150 or a memory unit of the system) in the form of a program or instructions that, when executed by the processing device 120 or the medical image display system 200, implement the process 300. The schematic of the process 300 presented below is illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. In addition, the order in which the operations of the process 300 are illustrated in FIG. 3 and described below is not limiting.
Step 310, a first primitive of a first medical image is acquired and a first anchor point for defining the first primitive is determined. In particular, step 310 may be performed by the first anchor point determination module 210.
A medical image is an image of internal tissue acquired non-invasively from a target object for medical treatment or medical research. In some embodiments, the format of the medical image may include the Joint Photographic Experts Group (JPEG) format, the Tagged Image File Format (TIFF), the Graphics Interchange Format (GIF), the Kodak FlashPix (FPX) format, the Digital Imaging and Communications in Medicine (DICOM) format, and the like. In some embodiments, the medical image may be a two-dimensional (2D) image or a three-dimensional (3D) image. In some embodiments, a three-dimensional image may be composed of a series of two-dimensional slices or layers.
In some embodiments, the medical image includes, but is not limited to, one or more of an X-ray image, a computed tomography (CT) image, a positron emission tomography (PET) image, a single photon emission computed tomography (SPECT) image, a magnetic resonance image (MRI), an ultrasound (US) image, a digital subtraction angiography (DSA) image, a magnetic resonance angiography (MRA) image, a time-of-flight magnetic resonance image (TOF-MRI), a magnetoencephalography (MEG) image, and the like. For ease of illustration, the medical image is described below as an ultrasound image.
The first medical image may be a medical image to be subjected to a user operation. User operations may include, but are not limited to, moving, rotating, flipping, zooming, splitting, stitching, cropping, and the like.
In some embodiments, the first anchor point determination module 210 may acquire medical image data. Taking ultrasound image data as an example, the first anchor point determination module 210 may control an ultrasound probe to transmit ultrasound waves to a target object or a portion thereof and receive the reflected ultrasound waves from the target object or the portion thereof as medical image data.
In some embodiments, the first medical image may be determined based on the medical image data. Continuing with the above example, the first anchor point determination module 210 may render the reflected ultrasound waves as ultrasound images of different modes, e.g., B-mode ultrasound images, M-mode ultrasound images, etc., depending on user requirements.
A primitive may be a basic element describing a drawing trace, with each primitive corresponding to one continuous drawing trace. The first primitive may be a primitive drawn on the first medical image on the screen. For example, if two regions of interest in the first medical image are delineated on the screen, the continuous delineation trace corresponding to each region of interest may serve as a primitive. In some embodiments, the first anchor point determination module 210 may obtain the first primitive through a segmentation model. For example, the first anchor point determination module 210 inputs the first medical image into a segmentation model, which outputs a first primitive that segments a region of interest in the first medical image. In some embodiments, the first anchor point determination module 210 may obtain, through the terminal device 130, a first primitive drawn by the user.
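As an illustration of this step, the following minimal sketch assumes a hypothetical segment_region_of_interest function standing in for the segmentation model and reduces its mask to a rectangular primitive; neither interface is specified by the present description.

```python
import numpy as np

def segment_region_of_interest(image):
    # Hypothetical stand-in for the segmentation model mentioned above; a real
    # model would be learned. Returns a binary mask of the region of interest.
    return (image > image.mean()).astype(np.uint8)

def mask_to_primitive(mask):
    # Reduce the mask to one continuous drawing trace. A real implementation would
    # trace the contour; the bounding-box corners stand in for a rectangular primitive.
    ys, xs = np.nonzero(mask)
    return [(int(xs.min()), int(ys.min())), (int(xs.max()), int(ys.min())),
            (int(xs.max()), int(ys.max())), (int(xs.min()), int(ys.max()))]

first_medical_image = np.random.rand(256, 256)   # stand-in for an ultrasound frame
first_primitive = mask_to_primitive(segment_region_of_interest(first_medical_image))
```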
The first anchor point may be a point used to define the shape, size, and position of the first primitive. It will be appreciated that first primitives of different shapes may correspond to different first anchor points. For example, if the first primitive is a line segment, the first anchor points may be the two endpoints of the line segment. As another example, if the first primitive is a circle, the first anchor points may be the center of the circle and any point on its circumference.
FIG. 5 is an exemplary diagram illustrating the acquisition of a second primitive based on a first primitive and user operation information according to some embodiments of the present description. As shown in FIG. 5, the display area of the screen may include a drawing area and an image area; the image area may display the first medical image and the drawing area may display the first primitive. If the first primitive is an ellipse, the first anchor points may be the center of the ellipse and any four points on its circumference: 1, 2, 3, 4, and 5.
In some embodiments, the first anchor point determination module 210 may automatically determine the first anchor point based on the first primitive and the anchor rule corresponding to the first primitive. In some embodiments, the first anchor point determination module 210 may obtain the first anchor point selected by the user on the screen through the terminal device 130.
In some embodiments, to ensure that the first anchor points define a unique first primitive, the first anchor point determination module 210 may further obtain a primitive feature of the first primitive. The primitive feature may be a label used to define the primitive's shape. For example, the primitive feature may be the shape of the first primitive ("line segment", "circle", "ellipse", etc.), a property of a first anchor point ("center", "point on the circumference", etc.), or a size parameter of the first primitive ("radius", "radian", "length", etc.). For example only, the first primitive may have two first anchor points and be defined as a particular line segment based on the primitive feature "line segment" and those two first anchor points. As another example, the first primitive has two first anchor points and may be defined as a particular circle based on the primitive features "anchor point 1 is the center" and "anchor point 2 is a point on the circumference". As yet another example, the first primitive has a single first anchor point and may be defined as a specific circle based on the primitive features "anchor point 1 is the center" and "the first primitive is a circle with radius r".
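The following is a minimal data-structure sketch of how anchor points and primitive features together define a primitive; the class and field names are illustrative assumptions, not terminology from the present description.

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    # A drawing-area primitive defined by its anchor points plus its primitive
    # feature (shape label) and size parameters. Names are illustrative only.
    shape: str                                   # e.g. "line_segment", "circle", "ellipse"
    anchors: list                                # anchor points as (x, y) drawing-area coordinates
    params: dict = field(default_factory=dict)   # e.g. {"radius": 20.0}

# Two anchor points plus the label "line_segment" define a unique line segment.
segment = Primitive(shape="line_segment", anchors=[(10.0, 10.0), (40.0, 25.0)])

# The same two points with the label "circle" (anchor 1 = center, anchor 2 on the
# circumference) define a different, equally unique primitive.
circle = Primitive(shape="circle", anchors=[(10.0, 10.0), (40.0, 25.0)])
```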
In some embodiments, the first anchor point determination module 210 may automatically identify the primitive feature of the first primitive, or may obtain the primitive feature input by the user through the terminal device 130; this is not limited in the present embodiment.
Step 320, acquiring user operation information and an identification point in the first medical image, and determining the second medical image based on the user operation information and the first screen coordinate position of the identification point in the screen coordinate system. In particular, step 320 may be performed by the second medical image acquisition module 220.
The user operation information is information indicating a user operation performed on the first medical image through the screen. For a detailed description of user operations on the first medical image, reference may be made to step 310.
The identification point in the first medical image may be any pixel point in the first medical image. For example, FIG. 6 is an exemplary schematic diagram of acquiring a second medical image based on a first medical image according to some embodiments of the present description; as shown in FIG. 6, point A in the first medical image may be selected as the identification point.
The second medical image may be the medical image after the user operation has been completed. For example, as shown in FIG. 6, the user performs an up-down flip operation on the first medical image, thereby obtaining the second medical image. In some embodiments, the first medical image and the second medical image may be determined based on the same medical image data. Continuing with the above example, the first medical image and the second medical image may be determined from the same reflected ultrasound waves.
For a detailed description of determining the second medical image based on the user operation information and the first image coordinate position of the identification point in the image coordinate system, reference may be made to FIG. 4 and its description, which will not be repeated here.
Step 330, determining a second anchor point corresponding to the second medical image based on the binding relationship, and acquiring a second primitive based on the second anchor point. Specifically, step 330 may be performed by the second primitive acquisition module 230.
The binding relationship may bind an anchor point to a corresponding first pixel point in the medical image. In some embodiments, the second primitive acquisition module 230 may determine the binding relationship based on the first anchor point and the first medical image.
Specifically, the second primitive acquisition module 230 may obtain the first pixel point corresponding to each first anchor point in the first medical image based on the coincidence relationship between the drawing area and the image area in the display area. For example only, as shown in FIG. 5, the second primitive acquisition module 230 may obtain the pixel points I, II, III, IV, and V at the positions in the first medical image that coincide with the first anchor points 1, 2, 3, 4, and 5, i.e., the first pixel points, based on the coincidence relationship between the drawing area and the image area in the display area.
Further, the second primitive acquisition module 230 may obtain the third image coordinate position of each corresponding first pixel point in the image coordinate system and bind that third image coordinate position to the first anchor point, thereby obtaining the binding relationship. The image coordinate system is a coordinate system defined with reference to the directions and positions in the medical image, and may be used to represent the relative positional relationships between pixel points in the medical image. In some embodiments, the image coordinate system is a two-dimensional coordinate system measured in pixels; for example, the X' and Y' axes of the image coordinate system are parallel to the length and width of the medical image, respectively. An image coordinate position in the image coordinate system is the coordinate position of a pixel point of the medical image in that coordinate system. For example only, if the image coordinate position of pixel point k in the medical image is (2, 3), then pixel point k is the pixel in the 2nd row and 3rd column of the medical image. It will be appreciated that the image coordinate system of the pixel points in the medical image does not change when the medical image is transformed. The third image coordinate position is the coordinate position of a first pixel point in the image coordinate system. Continuing with the above example, the third image coordinate positions of the corresponding first pixel points I, II, III, IV, and V in the image coordinate system are acquired and bound to the first anchor points 1, 2, 3, 4, and 5, respectively.
In some embodiments, the second primitive acquisition module 230 may locate each first pixel point in the second medical image based on its third image coordinate position in the image coordinate system, and then determine the corresponding second pixel point in the drawing area as a second anchor point based on the coincidence relationship between the drawing area and the image area in the display area. Continuing with the above example, as shown in FIG. 5, the first pixel points I, II, III, IV, and V may be located in the second medical image based on their image coordinate positions in the image coordinate system, and the corresponding second pixel points 1', 2', 3', 4', and 5' in the drawing area may then be determined as the second anchor points based on the coincidence relationship between the drawing area and the image area in the display area.
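A minimal sketch of the binding relationship and the second-anchor lookup, assuming the drawing area and the image area coincide pixel for pixel and taking an up-down flip as the user operation; the function names and the flip formula are assumptions for illustration only.

```python
def bind_anchors(first_anchors, display_to_image):
    # Binding relationship: each first anchor point is bound to the image-coordinate
    # position (row, col) of the first pixel point it coincides with.
    return {i: display_to_image(a) for i, a in enumerate(first_anchors)}

def recover_anchors(binding, image_to_display_after_op):
    # After the user operation, locate each bound first pixel point in the second
    # medical image and take the coinciding drawing-area point as the second anchor.
    return {i: image_to_display_after_op(pix) for i, pix in binding.items()}

IMAGE_ROWS = 256  # number of rows in the medical image (illustrative)

# Drawing area and image area coincide: anchor (x, y) overlaps image pixel (row, col) = (y, x).
display_to_image = lambda a: (int(a[1]), int(a[0]))
# After an up-down flip, image row r is displayed at drawing-area y = (IMAGE_ROWS - 1) - r.
image_to_display_flipped = lambda p: (float(p[1]), float((IMAGE_ROWS - 1) - p[0]))

first_anchors = [(10.0, 40.0), (40.0, 70.0), (70.0, 40.0), (40.0, 10.0), (40.0, 40.0)]
binding = bind_anchors(first_anchors, display_to_image)              # anchors 1..5 -> pixels I..V
second_anchors = recover_anchors(binding, image_to_display_flipped)  # second anchors 1'..5'
```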
In some embodiments of the present description, the anchor points and the medical image are bound according to the coincidence relationship between the drawing area and the image area, so that the transformed anchor points can be determined from the transformed medical image via the binding relationship, improving the efficiency of determining the second anchor points.
In some embodiments, the second primitive acquisition module 230 may anchor the position, size, and shape of the second primitive based on the second anchor points to obtain the second primitive. Continuing with the above example, as shown in FIG. 5, the position of the second primitive may be determined based on the second anchor point 5', and its shape and size may be determined based on the second anchor points 1', 2', 3', and 4', thereby obtaining the second primitive.
In some embodiments, the second primitive acquisition module 230 may anchor the position, size, and shape of the second primitive based on the second anchor points and the primitive feature. For example, the second primitive acquisition module 230 may determine that the second primitive is the line segment whose endpoints are the two second anchor points, based on those two second anchor points and the primitive feature "line segment".
In some embodiments of the present description, the medical image of the image area and the anchor points of the drawing area are bound, so that the transformed anchor points can be obtained from the transformed medical image by computing the transformation of the anchor points alone; the transformed primitive is then obtained from the transformed anchor points, allowing the drawing area and the image area to be transformed synchronously with this simple calculation.
It should be noted that the above description of flow 300 is provided for illustrative purposes only and is not intended to limit the scope of the present description. Various changes and modifications may be made by one of ordinary skill in the art in light of the description herein. However, such changes and modifications do not depart from the scope of the present specification. In some embodiments, the process 300 may include one or more additional operations, or one or more of the operations described above may be omitted.
As described above, multiple user operations on a medical image (for example, successive up-down and left-right flip operations) may change the image's information and/or distort it, so buffers may be established to store a buffered image after each operation.
In some embodiments, the recording module 240 may obtain the user's preference operation data and determine the number of buffers based on the preference operation data. The preference operation data may include the types of medical image operated on by the user, the corresponding average number of preferred operations, and the average number of rollback operations. Specifically, the recording module 240 may retrieve the corresponding historical operation records based on the current user's account and derive the user's preference operation data from those records. For example, the historical operation records of user A are retrieved based on user A's account; user A's average number of preferred operations on liver ultrasound images is 3, the average number on heart ultrasound images is 4, and so on. Further, the recording module 240 may determine the number of buffers according to the type of the first medical image and the preference operation data. For example, the recording module 240 may set the number of buffers to 3 because the first medical image is a liver ultrasound image and user A's average number of preferred operations on liver ultrasound images is 3, so that a corresponding buffered image can be stored for each operation.
In some embodiments, the recording module 240 may add a buffer for recording the medical image corresponding to the previous operation in response to the number of user rollback operations reaching a preset threshold. After a user performs a rollback operation followed by other operations, the medical image from before the rollback may need to be recalled. Thus, when the user performs many rollback operations, a buffer can be added to record the medical image corresponding to the previous operation.
The preset threshold may be a preset threshold on the number of operations. In some embodiments, the preset threshold may be determined based on the user's preference operation data. For example, if the preference operation data of user A indicates that user A's average number of rollback operations on liver ultrasound images is 6, a preset ratio (e.g., 1/2) of that average, i.e., 3, may be used as the preset threshold. For example only, when the number of user rollback operations reaches 3, the recording module 240 may add a buffer before the 3rd rollback operation to record the medical image corresponding to the previous operation.
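A minimal sketch of this buffer policy under the numbers used above (3 preferred operations, an average of 6 rollbacks, a 1/2 ratio); the class and method names are illustrative assumptions, not terminology from the present description.

```python
from collections import deque

class BufferedImageHistory:
    # Illustrative sketch of the recording module's buffer policy: the buffer count
    # comes from the user's preference operation data and grows once rollbacks reach
    # a preset threshold (a preset ratio of the average rollback count).
    def __init__(self, avg_preferred_ops, avg_rollback_ops, ratio=0.5):
        self.capacity = avg_preferred_ops                        # e.g. 3 for liver ultrasound
        self.rollback_threshold = int(avg_rollback_ops * ratio)  # e.g. 6 * 1/2 = 3
        self.rollback_count = 0
        self.buffers = deque(maxlen=self.capacity)

    def record(self, image):
        # Store the buffered image produced by the latest operation.
        self.buffers.append(image)

    def rollback(self):
        self.rollback_count += 1
        if self.rollback_count >= self.rollback_threshold:
            # Add a buffer so the image from the previous operation stays available.
            self.capacity += 1
            self.buffers = deque(self.buffers, maxlen=self.capacity)
        return self.buffers.pop() if self.buffers else None

history = BufferedImageHistory(avg_preferred_ops=3, avg_rollback_ops=6)
```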
In some embodiments of the present description, the number of buffers is dynamically adjusted for different user operations, so that the user can conveniently recall a previous medical image, improving the user experience.
FIG. 4 is an exemplary flow chart of determining a second medical image according to some embodiments of the present description. In some embodiments, the process 400 may be performed by the medical image display system 100 (e.g., the processing device 120) or the medical image display system 200 (e.g., the second medical image acquisition module 220). For example, the process 400 may be stored in the storage device 150 in the form of a program or instructions that, when executed by the medical image display system 100 (e.g., the processing device 120) or the medical image display system 200 (e.g., the second medical image acquisition module 220), implement the process 400. The schematic of the process 400 presented below is illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. In addition, the order in which the operations of the process 400 are illustrated in FIG. 4 and described below is not limiting.
Step 410, based on the user operation information, updating the first screen coordinate position of the identification point in the screen coordinate system, and obtaining the second screen coordinate position of the identification point in the screen coordinate system.
The screen coordinate system is a coordinate system defined with reference to the directions and positions of the screen. In some embodiments, the screen coordinate system is a three-dimensional coordinate system measured in pixels; for example, the X, Y, and Z axes of the screen coordinate system may point rightward, downward, and into the screen, respectively. A screen coordinate position in the screen coordinate system is the coordinate position of any point of the medical image in that coordinate system.
The first screen coordinate position may be a screen coordinate position of the identification point in the first medical image in a screen coordinate system. In some embodiments, the second medical image acquisition module 220 may determine the first screen coordinate position of the identification point in the screen coordinate system from the first image coordinate position of the identification point in the image coordinate system and the first mapping relationship.
The first image coordinate position may be the image coordinate position of the identification point of the first medical image in the image coordinate system. The first mapping relationship maps screen coordinate positions in the screen coordinate system to image coordinate positions in the image coordinate system. Specifically, the first mapping relationship may map, for any point in the medical image, its coordinate position in the screen coordinate system (i.e., the screen coordinate position) to its coordinate position in the image coordinate system (i.e., the image coordinate position). In some embodiments, the second medical image acquisition module 220 may represent the first mapping relationship by a preset first mapping function, a first mapping table, or the like.
Specifically, the second medical image acquisition module 220 may map the first image coordinate position of the identification point to the corresponding first screen coordinate position through the first mapping relationship. For example, as shown in FIG. 6, the identification point is point A, and the first image coordinate position of point A can be mapped to the first screen coordinate position of point A through the first mapping relationship.
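The following minimal sketch assumes the first mapping relationship is a simple affine map between image coordinates (row, col) and screen coordinates (x, y, z); the scale and offset values are made up for illustration.

```python
import numpy as np

scale = np.array([2.0, 2.0])       # screen pixels per image pixel (assumed)
offset = np.array([100.0, 50.0])   # screen (x, y) of image pixel (row 0, col 0) (assumed)

def image_to_screen(image_pos):
    # First mapping relationship: image coordinate (row, col) -> screen coordinate (x, y, z).
    r, c = image_pos
    x, y = offset + scale * np.array([c, r])
    return (x, y, 0.0)

def screen_to_image(screen_pos):
    # Inverse direction of the same first mapping relationship.
    x, y, _ = screen_pos
    c, r = (np.array([x, y]) - offset) / scale
    return (int(round(r)), int(round(c)))

first_image_pos_A = (30, 40)                             # identification point A, image coordinates
first_screen_pos_A = image_to_screen(first_image_pos_A)  # first screen coordinate position of A
assert screen_to_image(first_screen_pos_A) == first_image_pos_A
```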
The second screen coordinate position may be the screen coordinate position of the identification point of the second medical image in the screen coordinate system. In some embodiments, the second medical image acquisition module 220 may transform the first screen coordinate position of the identification point in the screen coordinate system according to the operation information, thereby obtaining the second screen coordinate position. For example, continuing with the example shown in FIG. 6, the operation information includes an up-down flip of the first medical image; the second medical image acquisition module 220 performs the up-down flip on the first medical image, during which the identification point A is flipped up and down along with the image, and the screen coordinate position of the flipped identification point A in the screen coordinate system is determined as the second screen coordinate position.
In some embodiments, the first coordinate position and the second coordinate position of some points may be the same. For example, as shown in FIG. 5, when the operation information includes enlarging the first medical image with coordinate position 5 as the enlargement center, the first coordinate position and the second coordinate position of point 5 are the same.
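Continuing the up-down flip example, a minimal sketch of how the first screen coordinate position of identification point A might be updated to the second screen coordinate position in step 410; the screen extent of the image area is an assumed value.

```python
# Sketch of step 410 for the up-down flip example: the identification point's first
# screen coordinate position is mirrored about the horizontal midline of the image
# area on screen. The screen extent of the image area is an illustrative assumption.
AREA_TOP_Y, AREA_BOTTOM_Y = 50.0, 560.0      # screen y of the image area's first and last rows

def flip_up_down_on_screen(screen_pos):
    x, y, z = screen_pos
    return (x, AREA_TOP_Y + AREA_BOTTOM_Y - y, z)

first_screen_pos_A = (180.0, 110.0, 0.0)     # from the first mapping relationship
second_screen_pos_A = flip_up_down_on_screen(first_screen_pos_A)   # -> (180.0, 500.0, 0.0)
```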
Step 420, determining a second medical image based on a second screen coordinate position of the identification point in the screen coordinate system.
Specifically, the second medical image acquisition module 220 may acquire the second image coordinate position of the identification point in the image coordinate system based on the second screen coordinate position of the identification point in the screen coordinate system and the first mapping relation.
The second image coordinate position may be the image coordinate position of the identification point of the second medical image in the image coordinate system. In some embodiments, the second medical image acquisition module 220 may map the second screen coordinate position to the second image coordinate position according to the first mapping relationship. For example, as shown in FIG. 6, the second screen coordinate position of point A may be mapped to the second image coordinate position of point A through the first mapping relationship. For a detailed description of the first mapping relationship, reference may be made to step 410.
Further, the second medical image acquisition module 220 may update a fourth image coordinate position of other pixel points in the first medical image in the image coordinate system based on the second image coordinate position of the identification point in the image coordinate system and the second mapping relation, thereby acquiring the second medical image.
The second mapping relationship maps the image coordinate position of the identification point in the image coordinate system to the fourth image coordinate positions of the other pixel points of the medical image in the image coordinate system. A fourth image coordinate position is the coordinate position in the image coordinate system of any pixel point of the medical image other than the identification point. In some embodiments, the second medical image acquisition module 220 may obtain the second mapping relationship based on the first image coordinate position of the identification point of the first medical image in the image coordinate system, the fourth image coordinate positions of the other pixel points of the first medical image in the image coordinate system, and the user operation information. For example, as shown in FIG. 6, after the second medical image acquisition module 220 determines the identification point A in the first medical image, the second mapping relationship between the fourth image coordinate positions of the other pixel points of the first medical image in the image coordinate system and the first image coordinate position of the identification point A in the image coordinate system may be obtained based on the user operation information.
In some embodiments, based on the second mapping relationship, the second medical image acquisition module 220 may map the second image coordinate position of the identification point A in the image coordinate system to the fourth image coordinate positions of the other pixel points of the second medical image in the image coordinate system, i.e., the updated fourth image coordinate positions of the other pixel points of the first medical image, thereby obtaining the second medical image.
It can be understood that the relative positional relationship in the image coordinate system between the identification point and the other pixel points of the medical image remains unchanged before and after the user operation, so the mapping between the second image coordinate position of the identification point and the fourth image coordinate positions of the other pixel points of the second medical image (i.e., the updated fourth image coordinate positions of the other pixel points of the first medical image in the image coordinate system) also conforms to the second mapping relationship.
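A minimal sketch of step 420 for the up-down flip example, using a small array as a stand-in image; the sign change on the row offsets is the part of the second mapping relationship derived from the user operation information, and all names and values are illustrative.

```python
import numpy as np

first_image = np.arange(6 * 8, dtype=float).reshape(6, 8)   # stand-in first medical image
first_pos_A = np.array([2, 3])                               # first image coordinate position of A

# Relative offsets of every pixel from identification point A; these relative
# positions are fixed before and after the user operation.
rows, cols = np.indices(first_image.shape)
offsets = np.stack([rows, cols], axis=-1) - first_pos_A

# Second image coordinate position of A after an up-down flip (from steps 410/420).
second_pos_A = np.array([first_image.shape[0] - 1 - first_pos_A[0], first_pos_A[1]])

# Second mapping relationship for the flip: row offsets change sign, column offsets do not.
second_positions = second_pos_A + offsets * np.array([-1, 1])

# Fourth image coordinate positions of the other pixel points give the second medical image.
second_image = np.empty_like(first_image)
second_image[second_positions[..., 0], second_positions[..., 1]] = first_image
assert np.array_equal(second_image, np.flipud(first_image))
```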
In some embodiments of the present description, because the image coordinate system is a coordinate system measured in pixels, the relative positional relationship in the image coordinate system between the identification point and the other pixel points of the medical image remains unchanged before and after the user operation. The fourth image coordinate positions of the other pixel points of the second medical image are therefore obtained by mapping from the second image coordinate position of the identification point, which ensures that the acquired second medical image is undistorted and highly accurate.
It should be noted that the above description of flow 400 is provided for illustrative purposes only and is not intended to limit the scope of the present description. Various changes and modifications may be made by one of ordinary skill in the art in light of the description herein. However, such changes and modifications do not depart from the scope of the present specification. In some embodiments, flow 400 may include one or more additional operations, or one or more of the operations described above may be omitted.
Possible benefits of the embodiments of the present description include, but are not limited to: (1) the anchor points of the drawing area are bound to the medical image of the image area, so the transformed anchor points can be obtained from the transformed medical image by computing the transformation of the anchor points alone, and the transformed primitives are then obtained from the transformed anchor points, allowing the drawing area and the image area to be transformed synchronously with this simple calculation; (2) because the image coordinate system is a coordinate system measured in pixels, the relative positional relationship in the image coordinate system between the identification point and the other pixel points of the medical image remains unchanged before and after the user operation, so the fourth image coordinate positions of the other pixel points of the second medical image are obtained by mapping from the second image coordinate position of the identification point, ensuring that the acquired second medical image is undistorted and highly accurate; (3) the anchor points and the medical image are bound according to the coincidence relationship between the drawing area and the image area, so the transformed anchor points can be determined from the transformed medical image via the binding relationship, improving the efficiency of determining the second anchor points; and (4) the number of buffers is dynamically adjusted for different user operations, so that the user can conveniently recall a previous medical image, improving the user experience. It should be noted that the advantages produced by different embodiments may be any one or a combination of the above, or any other advantage that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations to the present disclosure may occur to one skilled in the art. Such modifications, improvements, and modifications are intended to be suggested within this specification, and therefore, such modifications, improvements, and modifications are intended to be included within the spirit and scope of the exemplary embodiments of the present invention.
Meanwhile, this specification uses specific words to describe its embodiments. References to "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is included in at least one embodiment of the present description. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification are not necessarily referring to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of the present description may be combined as appropriate.
Furthermore, those skilled in the art will appreciate that aspects of the specification can be illustrated and described in terms of several patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful improvement thereof. Accordingly, aspects of the present description may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the specification may take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber-optic cable, RF, or the like, or any combination of the foregoing.
The computer program code necessary for the operation of portions of the present description may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or services such as software as a service (SaaS) in a cloud computing environment may be used.
Furthermore, the order in which the elements and sequences are processed, the use of numbers or letters, or the use of other designations in the description is not intended to limit the order of the processes and methods of the description unless explicitly recited in the claims. While the foregoing disclosure discusses, through various examples, certain embodiments of the invention presently considered useful, it is to be understood that such detail is merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the present disclosure. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing processing device or mobile device.
Likewise, it should be noted that, in order to simplify the presentation disclosed in this specification and thereby aid in understanding one or more embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in the claims. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the qualifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ ordinary rounding. Although the numerical ranges and parameters used in some embodiments of this specification are approximations that define the breadth of the ranges, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, referred to in this specification is hereby incorporated by reference in its entirety. Excluded are any application history documents that are inconsistent with or conflict with the content of this specification, as well as any documents (now or later attached to this specification) that would limit the broadest scope of the claims of this specification. It is noted that, if the description, definition, and/or use of a term in material attached to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (10)

1. A method of displaying a medical image, the method comprising:
acquiring a first primitive of a first medical image and determining a first anchor point for defining the first primitive;
acquiring user operation information and identification points in the first medical image, and determining a second medical image based on the user operation information and a first screen coordinate position of the identification points in a screen coordinate system;
determining a second anchor point corresponding to the second medical image based on a binding relation, and acquiring a second primitive based on the second anchor point; wherein the binding relation binds the anchor point to the corresponding first pixel point in the medical image.
2. The method of claim 1, wherein the determining a second medical image based on the user operation information and a first screen coordinate position of the identification point in a screen coordinate system comprises:
updating a first screen coordinate position of the identification point in the screen coordinate system based on the user operation information, and acquiring a second screen coordinate position of the identification point in the screen coordinate system;
and determining the second medical image according to the second screen coordinate position of the identification point in the screen coordinate system.
3. The method of claim 1, wherein the first screen coordinate position of the identification point in the screen coordinate system is determined by:
determining a first screen coordinate position of the identification point in a screen coordinate system according to the first image coordinate position of the identification point in the image coordinate system and a first mapping relation;
wherein the first mapping relation maps a screen coordinate position in the screen coordinate system to an image coordinate position in the image coordinate system.
4. The method of claim 2, wherein the determining the second medical image from the second screen coordinate position of the identification point in the screen coordinate system comprises:
acquiring a second image coordinate position of the identification point in the image coordinate system based on a second screen coordinate position of the identification point in the screen coordinate system and a first mapping relation;
updating a fourth image coordinate position of other pixel points of the first medical image in the image coordinate system based on a second image coordinate position of the identification point in the image coordinate system and a second mapping relation, so as to acquire the second medical image;
wherein the second mapping relation maps the image coordinate position of the identification point in the image coordinate system to the fourth image coordinate position of the other pixel points of the medical image in the image coordinate system.
5. The method of claim 1, wherein the method further comprises:
acquiring medical image data, wherein the first medical image and the second medical image are determined based on the same medical image data.
6. The method of claim 1, wherein the determining a second anchor point corresponding to the second medical image based on the binding relationship comprises:
determining the first pixel point in the second medical image based on a third image coordinate position of the first pixel point in the image coordinate system;
and determining a second pixel point corresponding to the first pixel point as the second anchor point based on the binding relation.
7. The method of claim 1, wherein the method further comprises:
acquiring preference operation data of the user, and determining a number of buffer areas based on the preference operation data;
and when a number of user rollback operations meets a preset threshold, adding a buffer area for recording the medical image corresponding to the previous operation.
8. A display system for medical images, the system comprising:
a first anchor point determining module, configured to acquire a first primitive of a first medical image, and determine a first anchor point for defining the first primitive;
the second medical image acquisition module is used for acquiring user operation information and identification points in the first medical image, and determining a second medical image based on the user operation information and a first screen coordinate position of the identification points in a screen coordinate system;
the second primitive acquisition module is used for determining a second anchor point corresponding to the second medical image based on a binding relation, and acquiring a second primitive based on the second anchor point; wherein the binding relation binds the anchor point to the corresponding first pixel point in the medical image.
9. A display device for medical images, the device comprising:
at least one storage medium storing computer instructions;
at least one processor executing the computer instructions to implement the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions, wherein, when the computer instructions are read by a computer, the computer performs the method for displaying a medical image according to any one of claims 1 to 7.
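By way of illustration only, and not as part of the claims, the first mapping relation recited in claims 3 and 4 could be realized as a simple zoom-and-pan transform between screen and image coordinates, and the buffer adjustment of claim 7 as a resizable undo history. The function and class names, the scale/origin parameters, and the rollback threshold below are assumptions introduced for explanation.

```python
def screen_to_image(screen_pos, scale, origin):
    """Assumed first mapping relation: a screen position is the image position
    scaled by the display zoom factor and shifted by the on-screen origin."""
    sx, sy = screen_pos
    ox, oy = origin
    return ((sx - ox) / scale, (sy - oy) / scale)

def image_to_screen(image_pos, scale, origin):
    ix, iy = image_pos
    ox, oy = origin
    return (ix * scale + ox, iy * scale + oy)

# After a drag, the identification point's second screen position is converted back
# to a second image coordinate position; the other pixels' fourth image coordinate
# positions then follow via the second mapping relation.
second_image_pos = screen_to_image((640.0, 360.0), scale=2.0, origin=(100.0, 50.0))
```

```python
from collections import deque

class UndoHistory:
    """Keeps the medical images produced by previous operations so the user can
    roll back; one more buffer is added when rollbacks become frequent."""
    def __init__(self, initial_buffers=5, rollback_threshold=3):
        self.buffers = deque(maxlen=initial_buffers)
        self.rollback_count = 0
        self.rollback_threshold = rollback_threshold

    def record(self, image):
        self.buffers.append(image)  # store the image produced by the latest operation

    def rollback(self):
        self.rollback_count += 1
        if self.rollback_count >= self.rollback_threshold:
            # Preference data shows frequent rollbacks: grow the history by one buffer.
            self.buffers = deque(self.buffers, maxlen=self.buffers.maxlen + 1)
            self.rollback_count = 0
        return self.buffers.pop() if self.buffers else None
```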
CN202311100690.3A 2023-08-29 2023-08-29 Medical image display method, system, device and storage medium Pending CN116913480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311100690.3A CN116913480A (en) 2023-08-29 2023-08-29 Medical image display method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311100690.3A CN116913480A (en) 2023-08-29 2023-08-29 Medical image display method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN116913480A true CN116913480A (en) 2023-10-20

Family

ID=88365230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311100690.3A Pending CN116913480A (en) 2023-08-29 2023-08-29 Medical image display method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN116913480A (en)

Similar Documents

Publication Publication Date Title
US11657509B2 (en) Method for precisely and automatically positioning reference line for integrated images
WO2021238438A1 (en) Tumor image processing method and apparatus, electronic device, and storage medium
CN107886508B (en) Differential subtraction method and medical image processing method and system
US20230131722A1 (en) Systems and methods for image registration
US10580181B2 (en) Method and system for generating color medical image based on combined color table
CN111063424B (en) Intervertebral disc data processing method and device, electronic equipment and storage medium
CN109427059B (en) Planar visualization of anatomical structures
US8588490B2 (en) Image-based diagnosis assistance apparatus, its operation method and program
JP2007275595A (en) View creating method for reproducing tomographic image data
CN116503607B (en) CT image segmentation method and system based on deep learning
JP2004174241A (en) Image forming method
CN113223028A (en) Multi-modal liver tumor segmentation method based on MR and CT
US11189030B2 (en) Method and device for determining liver segments in a medical image
US11615267B2 (en) X-ray image synthesis from CT images for training nodule detection systems
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
CN116913480A (en) Medical image display method, system, device and storage medium
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
US20210256741A1 (en) Region correction apparatus, region correction method, and region correction program
CN106296707A (en) Medical image processing method and device
CN112767314A (en) Medical image processing method, device, equipment and storage medium
EP3695380A1 (en) Hypersurface reconstruction of microscope view
US20230064516A1 (en) Method, device, and system for processing medical image
US9020231B2 (en) Method and apparatus for measuring captured object using brightness information and magnified image of captured image
US20240096086A1 (en) Information processing apparatus, information processing method, and information processing program
CN113538426A (en) Medical image processing method and device and focus positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination