WO2022053049A1 - Dynamic fluoroscopy method, apparatus and system for a C-arm device - Google Patents

Dynamic fluoroscopy method, apparatus and system for a C-arm device

Info

Publication number
WO2022053049A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
human body
imaging
parameter
Prior art date
Application number
PCT/CN2021/118006
Other languages
English (en)
French (fr)
Inventor
孙彪
向军
王振玮
王伟懿
张娜
冯娟
Original Assignee
上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority date
Filing date
Publication date
Priority claimed from CN202010957835.1A external-priority patent/CN111973209A/zh
Priority claimed from CN202011017481.9A external-priority patent/CN112057094A/zh
Priority claimed from CN202011019279.XA external-priority patent/CN112150600B/zh
Priority claimed from CN202011019324.1A external-priority patent/CN112150543A/zh
Application filed by 上海联影医疗科技股份有限公司 filed Critical 上海联影医疗科技股份有限公司
Priority to EP21866101.5A priority Critical patent/EP4201331A4/en
Publication of WO2022053049A1 publication Critical patent/WO2022053049A1/zh
Priority to US18/182,286 priority patent/US20230230243A1/en

Classifications

    • A61B 6/487: Diagnostic techniques involving generating temporal series of image data involving fluoroscopy
    • A61B 6/5229: Devices using data or image processing specially adapted for radiation diagnosis, combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/025: Tomosynthesis
    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/037: Emission tomography
    • A61B 6/4441: Source unit and detector unit coupled by a rigid structure, the rigid structure being a C-arm or U-arm
    • A61B 6/481: Diagnostic techniques involving the use of contrast agents
    • A61B 6/482: Diagnostic techniques involving multiple energy imaging
    • A61B 6/502: Apparatus specially adapted for diagnosis of breast, i.e. mammography
    • A61B 6/504: Apparatus specially adapted for diagnosis of blood vessels, e.g. by angiography
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06T 2207/10116: Image acquisition modality: X-ray image
    • G06T 2207/30004: Subject of image: biomedical image processing

Definitions

  • the present application relates to the technical field of image processing, and more particularly, to a dynamic fluoroscopy method, device, and system for a C-arm device.
  • Radiation devices (e.g., DSA devices, DR devices, X-ray machines, mammography machines, etc.) image a subject by irradiating it with radiation (e.g., X-rays).
  • DSA devices: digital subtraction angiography systems.
  • DR devices: digital radiography systems.
  • When medical staff can dynamically understand the changes and movements of the subject's lesions and/or various tissues and organs, they can obtain diagnostic results or perform clinical operations more quickly and accurately.
  • the present application discloses a dynamic fluoroscopy method, device and system for a C-arm device, so that medical staff can dynamically understand the changes and movements of the subject's lesions and/or various tissues and organs, and thus obtain diagnostic results or perform clinical operations more quickly and accurately.
  • an embodiment of the present application provides a dynamic fluoroscopy method for a C-arm device, including: photographing a subject in one shooting cycle and, during the shooting process, acquiring first fluoroscopy data of a ray source irradiating the subject at a first energy and second fluoroscopy data of the ray source irradiating the subject at a second energy different from the first energy; shooting the subject in a plurality of consecutive shooting cycles; and displaying a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data acquired in each of the plurality of consecutive shooting cycles.
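  • The per-cycle, two-energy acquisition described above can be sketched as follows. This is an illustrative outline, not the patented implementation: all names, the `expose` callback, and the energy values are assumptions.

```python
from dataclasses import dataclass

LOW_KV, HIGH_KV = 60, 120  # assumed first/second energy settings (illustrative)

@dataclass
class CycleData:
    first_frame: object   # first fluoroscopy data (first energy)
    second_frame: object  # second fluoroscopy data (second energy)

def acquire_cycles(expose, num_cycles):
    """Call expose(kv) twice per shooting cycle, once at each energy,
    over num_cycles consecutive cycles; the resulting frame pairs are
    the input for displaying a dynamic image."""
    cycles = []
    for _ in range(num_cycles):
        cycles.append(CycleData(first_frame=expose(LOW_KV),
                                second_frame=expose(HIGH_KV)))
    return cycles
```

  • In this sketch the detector readout is abstracted into the `expose` callback so the alternating-energy control flow stays visible.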
  • an embodiment of the present application provides an imaging positioning method for a medical imaging device, the method comprising:
  • acquiring a virtual human body model corresponding to the imaging object, and acquiring first position information corresponding to a user's operation instruction;
  • determining, based on the first position information, an internal human body image corresponding to the first position information in the virtual human body model, and displaying the internal human body image;
  • if the internal image of the human body corresponding to the first position information corresponds to a target shooting position, determining a target imaging position of the medical imaging device according to the first position information;
  • if the internal image of the human body corresponding to the first position information does not correspond to the target shooting position, continuing to acquire second position information different from the first position information, and determining the target imaging position of the medical imaging device according to the internal image of the human body corresponding to the second position information.
  • the internal image of the human body corresponding to the first position information input by the user is determined based on the virtual human body model, and the target imaging position of the medical imaging device is obtained by judging whether that internal image corresponds to the target shooting position. This avoids radiation damage to the human body during the imaging operation: the user may set parameters arbitrarily during the imaging positioning process without causing any damage to the human body, thereby ensuring the accuracy of the positioning.
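  • The positioning loop above can be sketched as a small function. Everything here is hypothetical: `render_internal(pos)` stands in for the virtual-model view of an entered position, and `is_target(img)` for the user's judgement of whether that view matches the desired shooting position; no radiation is involved until a position is returned.

```python
def locate_imaging_position(positions, render_internal, is_target):
    """Walk through user-entered positions until the simulated internal
    image corresponds to the target shooting position; return that position
    as the device's target imaging position, or None if none matched."""
    for pos in positions:
        internal_image = render_internal(pos)  # view on the virtual model
        if is_target(internal_image):
            return pos
    return None
```

  • Because each candidate position is evaluated only on the virtual model, the patient receives no dose while parameters are adjusted.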
  • an embodiment of the present application provides an image display method, the method comprising: acquiring a first blood vessel image of an imaging object collected by an angiography device based on a first field of view parameter, and displaying the first blood vessel image;
  • determining a second field of view parameter based on a received parameter adjustment instruction and the first field of view parameter, wherein the parameter adjustment instruction includes a scaling adjustment ratio;
  • determining a second blood vessel image based on the second field of view parameter, and displaying the second blood vessel image and the first blood vessel image synchronously.
  • the second blood vessel image is determined based on the received parameter adjustment instruction and displayed synchronously with the first blood vessel image, which avoids repeatedly switching the display between scanned images of different fields of view. This reduces the number of repeated imaging operations and the radiation dose, extends the service life of the angiography equipment, improves the diagnostic efficiency of angiography, and reduces errors in angiography diagnostic results caused by repeated switching.
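  • Deriving the second field of view parameter from the first one and the scaling adjustment ratio can be sketched as below. The `(center_x, center_y, width, height)` representation of a field of view is an assumption for illustration, not taken from the patent.

```python
def derive_second_fov(first_fov, scale_ratio):
    """first_fov: (center_x, center_y, width, height); scale_ratio > 1 zooms in.
    The second field of view keeps the same center and scales the extent by
    the received adjustment ratio, so the second blood vessel image can be
    displayed alongside the first without a second scan."""
    cx, cy, w, h = first_fov
    return (cx, cy, w / scale_ratio, h / scale_ratio)
```

  • Keeping the center fixed is a design choice of this sketch; a real instruction could also carry a pan offset.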
  • an embodiment of the present application provides a method for generating a volume reconstruction image, including: acquiring projection data at each scanning angle; constructing a target volume coordinate system based on an initial volume coordinate system and a desired reconstruction direction; and, in the target volume coordinate system, reconstructing the projection data according to the desired reconstruction direction to generate a target volume reconstruction image.
  • in this way, by acquiring the projection data at each scanning angle, a target volume coordinate system is constructed based on the initial volume coordinate system and the desired reconstruction direction, and in the target volume coordinate system the projection data are reconstructed according to the desired reconstruction direction to generate the target volume reconstructed image.
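  • Constructing a target volume coordinate system from an initial one and a desired reconstruction direction can be illustrated with a rotation of the initial axes. Real volume reconstruction is three-dimensional; the 2-D in-plane rotation here is an assumed simplification for illustration only.

```python
import math

def target_volume_axes(direction_deg):
    """Return (x_axis, y_axis) of the target volume coordinate system,
    obtained by rotating the initial axes (1, 0) and (0, 1) by
    direction_deg degrees; reconstruction then proceeds along the
    rotated axes."""
    t = math.radians(direction_deg)
    x_axis = (math.cos(t), math.sin(t))
    y_axis = (-math.sin(t), math.cos(t))
    return x_axis, y_axis
```

  • A direction of 0 degrees leaves the initial coordinate system unchanged, so the initial volume coordinate system is a special case of the target one.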
  • an embodiment of the present application provides a dynamic fluoroscopy system for a C-arm device, including a photographing module and a display module;
  • the photographing module is used for photographing a subject in one shooting cycle and acquiring, during the shooting process, first fluoroscopy data of the radiation source irradiating the subject at a first energy and second fluoroscopy data of the radiation source irradiating the subject at a second energy different from the first energy;
  • the photographing module is also used for photographing the subject in a plurality of continuous shooting cycles;
  • the display module is used for displaying a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data obtained in each of the plurality of continuous shooting cycles.
  • an imaging positioning device for a medical imaging device comprising:
  • a virtual human body model acquiring module used for acquiring a virtual human body model corresponding to the imaging object, and acquiring the first position information corresponding to the user's operation instruction;
  • an internal human body image display module configured to determine, based on the first position information, an internal human body image corresponding to the first position information in the virtual human body model, and display the internal human body image;
  • a first target imaging position determination module configured to determine a target imaging position of the medical imaging device according to the first position information if the internal human body image corresponding to the first position information corresponds to a target shooting position;
  • a second target imaging position determination module configured to continue to acquire second position information different from the first position information if the internal image of the human body corresponding to the first position information does not correspond to the target shooting position, and to determine a target imaging position of the medical imaging device according to the internal image of the human body corresponding to the second position information.
  • an embodiment of the present application provides an image display device, the device comprising:
  • a first blood vessel image display module configured to acquire a first blood vessel image of an imaging object collected by an angiography device based on a first field of view parameter, and display the first blood vessel image;
  • a second field of view parameter determination module configured to determine a second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter; wherein the parameter adjustment instruction includes a scaling adjustment ratio;
  • a second blood vessel image display module configured to determine a second blood vessel image based on the second field of view parameter, and display the second blood vessel image and the first blood vessel image synchronously.
  • the embodiments of the present application provide an angiography device, the angiography device includes an imaging component, at least one display device, and a controller;
  • the imaging component is configured to acquire the first blood vessel image based on the first field of view parameter;
  • the display device is configured to display the first blood vessel image and the second blood vessel image;
  • the controller includes one or more processors and a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement any of the above-mentioned image display methods.
  • an embodiment of the present application further provides a volume reconstruction image generation device, including:
  • the projection data acquisition module is used to acquire the projection data under each scanning angle
  • the target volume coordinate system generation module is used to construct the target volume coordinate system based on the initial volume coordinate system and the desired reconstruction direction;
  • a target volume reconstructed image generation module is configured to reconstruct the projection data according to the desired reconstruction direction in the target volume coordinate system to generate a target volume reconstructed image.
  • an embodiment of the present application further provides a volume reconstruction image generation system, including: a control device and an image acquisition device;
  • the control device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the processor implements the volume reconstruction image generation method described in any one of the first aspects;
  • the image acquisition device is used for scanning the scanning object under each scanning angle, so as to obtain projection data under each scanning angle.
  • an embodiment of the present application provides a dynamic fluoroscopy device for a C-arm device; the device includes at least one processor and at least one storage device, the storage device is used for storing instructions, and when the at least one processor executes the instructions, the dynamic fluoroscopy method described in any embodiment of the present application is implemented.
  • an embodiment of the present application provides a C-arm imaging system, including a ray source, a detector, a memory, and a display, where the ray source includes a bulb tube, a high-voltage generator, and a high-voltage control module. The high-voltage control module controls the high-voltage generator to switch back and forth between a first energy and a second energy; driven by the high-voltage generator, the ray source emits radiation to the subject within one shooting cycle, and the detector acquires first fluoroscopy data of the subject under irradiation at the first energy and second fluoroscopy data under irradiation at the second energy.
  • the memory stores the first fluoroscopy data and the second fluoroscopy data obtained in each of a plurality of consecutive shooting cycles; the memory also stores an image processing unit, and the image processing unit subtracts the first fluoroscopy data and the second fluoroscopy data in each shooting cycle to obtain an image of the subject in that shooting cycle, from which a dynamic image of the subject is further obtained.
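  • The per-cycle subtraction performed by the image processing unit can be sketched as a weighted logarithmic subtraction, a common dual-energy technique for suppressing one material (e.g., bone). The log form, the default weight, and the flat-list frame representation are assumptions for illustration, not the patented algorithm.

```python
import math

def dual_energy_subtract(first_frame, second_frame, weight=0.5):
    """Per-pixel weighted log subtraction of the first-energy frame from
    the second-energy frame acquired in the same shooting cycle; pixel
    values are assumed to be positive detector intensities."""
    return [math.log(hi) - weight * math.log(lo)
            for lo, hi in zip(first_frame, second_frame)]
```

  • Applying this to the frame pair of every shooting cycle yields the per-cycle images from which the dynamic image is assembled.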
  • an embodiment of the present application provides a medical imaging device, where the medical imaging device includes:
  • one or more processors;
  • a memory for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the imaging positioning method of any one of the medical imaging devices mentioned above.
  • an embodiment of the present application provides a computer-readable storage medium, where the storage medium stores computer instructions, and after a computer reads the computer instructions in the storage medium, the computer executes the dynamic fluoroscopy method described in any one of the embodiments of the present application.
  • an embodiment of the present application further provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to perform the imaging positioning method of any one of the medical imaging devices mentioned above.
  • an embodiment of the present application further provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to perform any one of the above-mentioned image display methods.
  • embodiments of the present application further provide a storage medium containing computer-executable instructions, which, when executed by a computer processor, implement the volume reconstruction image generation method according to any one of the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario of a dynamic fluoroscopy system of a C-arm device provided by an embodiment of the present application.
  • FIG. 2 is an exemplary flowchart of a dynamic fluoroscopy method for a C-arm device provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of an imaging positioning method for a medical imaging device provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of an imaging positioning method of a medical imaging device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an interactive interface provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of an imaging positioning method for a medical imaging device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an imaging scene of a virtual human body model provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of an image display method provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of an image display method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a method for generating a volume reconstruction image provided by an embodiment of the present application
  • FIG. 11 is a schematic diagram of the definition of an initial volume coordinate system provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of the definition of a target volume coordinate system provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a method for generating a volume reconstruction image provided by an embodiment of the present application
  • FIG. 14 is an exemplary block diagram of the dynamic fluoroscopy system of the C-arm device provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an imaging positioning device of a medical imaging device provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a medical imaging device provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an image display device provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of an angiography device provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a volume reconstruction image generation device provided by an embodiment of the application.
  • FIG. 20 is a schematic structural diagram of a volume reconstruction image generation system provided by an embodiment of the application.
  • FIG. 21 is a schematic structural diagram of a control device provided by an embodiment of the present application.
  • the terms "system", "device", "unit", and "module" are used to distinguish different components, elements, parts, sections, or assemblies at different levels.
  • "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • plurality means at least two, such as two, three, etc., unless expressly and specifically defined otherwise.
  • the terms "installed", "connected", "coupled", and "fixed" should be understood in a broad sense unless otherwise expressly specified: the connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be an internal connection between two elements or an interaction relationship between two elements.
  • a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediate medium.
  • the first feature being "above", "over", or "on top of" the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • the first feature being "below", "beneath", or "under" the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
  • FIG. 1 is a schematic diagram of an application scenario of a dynamic fluoroscopy system of a C-arm device according to some embodiments of the present application.
  • the dynamic fluoroscopy system 100 may include a fluoroscopy device 110, a network 120, at least one terminal 130, a processing device 140, and a storage device 150.
  • the various components in the system 100 may be connected to each other through the network 120.
  • the fluoroscopy device 110 and the at least one terminal 130 may be connected or communicate through the network 120.
  • the fluoroscopy apparatus 110 may include digital subtraction angiography (DSA) devices, digital radiography (DR), computed radiography (CR), digital fluorography (DF), CT scanners, MRI scanners, mammography machines, C-arm equipment, etc.
  • the fluoroscopy apparatus 110 may include a gantry, a detector, a detection area, a scan bed, and a radiation source.
  • the gantry can be used to support the detector and radiation source.
  • the scanning bed is used to place the subject for scanning.
  • Subjects may include patients, phantoms, or other objects to be scanned.
  • the radiation source may emit X-rays to the subject to illuminate the subject.
  • a detector may be used to receive X-rays.
  • the fluoroscopy device 110 may acquire fluoroscopic data in the multiple shooting cycles to generate (or reconstruct) dynamic images of the subject in the multiple shooting cycles.
  • Network 120 may include any suitable network capable of facilitating the exchange of information and/or data for the dynamic fluoroscopy system 100.
  • at least one component of the dynamic fluoroscopy system 100 (e.g., the fluoroscopy device 110, the processing device 140, the storage device 150, the at least one terminal 130) may exchange information and/or data with other components of the system via the network 120.
  • the processing device 140 may obtain an image of the subject from the fluoroscopy device 110 via the network 120 .
  • the network 120 may include a public network (eg, the Internet), a private network (eg, a local area network (LAN)), a wired network, a wireless network (eg, an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), satellite network, telephone network, router, hub, switch, etc. or any combination thereof.
  • the network 120 may include a wired network, a fiber optic network, a telecommunication network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, etc., or any combination thereof.
  • network 120 may include at least one network access point.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of dynamic perspective system 100 may connect to network 120 to exchange data and/or information.
  • At least one terminal 130 may communicate and/or connect with the fluoroscopy device 110 , the processing device 140 and/or the storage device 150 .
  • the first perspective data and the second perspective data acquired by the processing device 140 may be stored in the storage device 150 .
  • at least one terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, the like, or any combination thereof.
  • mobile device 131 may include a mobile control stick, a personal digital assistant (PDA), a smartphone, etc., or any combination thereof.
  • at least one terminal 130 may include a display, and the display may be used to display information related to the dynamic perspective process (eg, a dynamic image of the subject).
  • At least one terminal 130 may include input devices, output devices, and the like.
  • the input device may include keyboard input, touch screen (e.g., with tactile or haptic feedback) input, voice input, eye-tracking input, gesture-tracking input, brain monitoring system input, image input, video input, or any other similar input mechanism.
  • Input information received through the input device may be transmitted, eg, via a bus, to the processing device 140 for further processing.
  • Other types of input devices may include cursor control devices such as a mouse, trackball, or cursor direction keys, among others.
  • an operator (e.g., a technician or doctor) may input information through the input device.
  • Output devices may include displays, speakers, printers, etc., or any combination thereof.
  • the output device may be used to output a dynamic image or the like determined by the processing device 140 .
  • at least one terminal 130 may be part of processing device 140 .
  • the processing device 140 may process data and/or information obtained from the fluoroscopy device 110 , the storage device 150 , the at least one terminal 130 , or other components of the dynamic fluoroscopy system 100 .
  • the processing device 140 may obtain the fluoroscopic data of the subject from the fluoroscopic device 110 .
  • processing device 140 may be a single server or group of servers. Server groups can be centralized or distributed.
  • processing device 140 may be local or remote.
  • processing device 140 may access information and/or data from the fluoroscopy device 110 , storage device 150 , and/or at least one terminal 130 via network 120 .
  • processing device 140 may be directly connected to fluoroscopy device 110, at least one terminal 130, and/or storage device 150 to access information and/or data.
  • processing device 140 may be implemented on a cloud platform.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, inter-cloud clouds, multi-clouds, etc., or any combination thereof.
  • Storage device 150 may store data, instructions, and/or any other information, for example, historical imaging protocols. In some embodiments, storage device 150 may store data obtained from fluoroscopy device 110, at least one terminal 130, and/or processing device 140. In some embodiments, storage device 150 may store data and/or instructions that processing device 140 executes or uses to perform the example methods described in this application. In some embodiments, storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), the like, or any combination thereof. In some embodiments, storage device 150 may be implemented on a cloud platform.
  • storage device 150 may be connected to network 120 to communicate with at least one other component in dynamic fluoroscopy system 100 (eg, processing device 140, at least one terminal 130). At least one component in dynamic fluoroscopy system 100 may access data (eg, fluoroscopic data) stored in storage device 150 via network 120. In some embodiments, storage device 150 may be part of processing device 140.
  • the storage device 150 may be a data storage device including a cloud computing platform, such as a public cloud, a private cloud, a community cloud, a hybrid cloud, and the like.
  • the present application also relates to a C-arm imaging system.
  • the C-arm imaging system may include a radiation source, a detector, a memory and a display, and the radiation source includes a bulb, a high-voltage generator and a high-voltage control module.
  • the high-voltage control module can control the high-voltage generator to switch back and forth between the first energy and the second energy;
  • driven by the high-voltage generator, the radiation source can emit rays toward the subject during one shooting cycle;
  • the detector can acquire the first fluoroscopic data of the subject under the irradiation of the first energy ray and the second fluoroscopic data under the irradiation of the second energy ray;
  • the memory can store the first fluoroscopic data and the second fluoroscopic data obtained in each of a plurality of consecutive shooting cycles;
  • the memory also stores an image processing unit; the image processing unit performs subtraction processing on the first fluoroscopic data and the second fluoroscopic data of each shooting cycle to obtain an image of the subject for that cycle, thereby obtaining dynamic images over multiple shooting cycles; the display can be used to display the dynamic images of the subject over the multiple shooting cycles.
  • the medical staff needs to take pictures of the subject frequently to understand the physical condition of the subject.
  • a picture can only reflect the condition of the lesions and/or various tissues and organs of the subject at a certain moment, which often cannot satisfy the demands of the medical staff.
  • medical staff need to observe, in real time, the movement of the whole or part of the subject (for example, bone, soft tissue, etc.) or of a certain lesion. Therefore, a dynamic fluoroscopy function is needed.
  • the fluoroscopic device 110 continuously shoots the subject in one or more shooting cycles to obtain dynamic fluoroscopic images.
  • a fluoroscopic device with a fluoroscopy function can usually only irradiate the subject at a single energy to obtain fluoroscopic images, so the photoelectric absorption effect and the Compton scattering effect cannot be effectively used to obtain the images required by medical staff, reducing the efficiency of diagnosis and clinical operations.
  • some embodiments of the present application provide a dynamic fluoroscopy method in which the ray source irradiates the subject at different energies in turn, so as to obtain fluoroscopic data of the subject under irradiation at different energies; based on the photoelectric absorption effect and the Compton scattering effect, the dynamic fluoroscopic image of the subject required by the medical staff can then be obtained, which can effectively improve the clinical operation efficiency of the medical staff and better assist the medical staff in diagnosis/treatment.
  • FIG. 2 is an exemplary flowchart of a dynamic fluoroscopy method according to some embodiments of the present application.
  • the dynamic fluoroscopy method 200 may be performed by the dynamic fluoroscopy system 100 (eg, the processing device 140).
  • the dynamic fluoroscopy method 200 may be stored in a storage device (eg, the storage device 150) in the form of programs or instructions, and the dynamic fluoroscopy method 200 may be implemented when the dynamic fluoroscopy system 100 (eg, the processing device 140) executes the programs or instructions.
  • Step 210: Shoot the subject in one shooting cycle; during the shooting, acquire first fluoroscopic data of the ray source irradiating the subject at a first energy, and acquire second fluoroscopic data of the ray source irradiating the subject at a second energy different from the first energy.
  • step 210 may be performed by the capture module 310 .
  • the subject may refer to an object that is irradiated by a radiation source at a certain position under the fluoroscopy device 110 .
  • the subject may include a person, eg, a patient or examinee, a part of whom (eg, head, chest) is photographed.
  • the subject may also be a specific organ, tissue, site, or the like.
  • the photographing period can be understood as a period of time during which the radiation source will photograph the subject with rays of the first energy and the second energy, respectively.
  • the ray source photographing the subject at the first energy and the ray source photographing the subject at the second energy may be continuous, that is, the second energy shooting is performed immediately after the first energy shooting is completed.
  • photographing (ie, irradiating) the subject with the ray source at the first energy may also be referred to as the first energy photographing
  • the ray source photographing (ie, irradiating) the subject at the second energy may also be referred to as the second energy photographing.
  • the high-voltage generator can be controlled by the high-voltage control module to switch between the second energy and the first energy, so as to realize the switching of the first energy shooting and the second energy shooting.
  • the capture period may be 1/25 second, 1/50 second, or the like.
  • the fluoroscopy device 110 may include a plurality of radiation sources (eg, a first radiation source and a second radiation source), and the radiation energy emitted by each radiation source is different.
  • a first radiation source may emit a first radiation at a first energy
  • a second radiation source may emit a second radiation at a second energy; the first energy may be lower or higher than the second energy.
  • the fluoroscopy device 110 may also have only one radiation source, and the radiation source may emit radiation with different energies under the control of the high-voltage control module.
  • the energy difference between the first energy and the second energy may be 5 keV to 120 keV. In some preferred embodiments, the energy difference between the first energy and the second energy may be 10 keV to 90 keV. In some preferred embodiments, the energy difference between the first energy and the second energy may be 20 keV to 100 keV. In some embodiments, the available energy range of the first energy and the available energy range of the second energy may partially overlap or be spaced apart. For example, the available energy range of the first energy is 50 keV to 100 keV, and the available energy range of the second energy is 70 keV to 130 keV.
  • the available energy range of the first energy is 60 keV to 90 keV, and the available energy range of the second energy is 100 keV to 120 keV.
  • the first energy may be 70 keV and the second energy may be 120 keV.
  • the processing device 140 may obtain fluoroscopic data of a subject exposed to rays (or beams) of different energies. For example, the processing device 140 may acquire first fluoroscopic data of the subject under irradiation of rays of the first energy and second fluoroscopic data of the subject under irradiation of rays of the second energy.
  • the fluoroscopic data may refer to data detected by a detector of the fluoroscopic device 110 after the rays pass through the irradiated subject; the memory of the fluoroscopic device 110 may store the first fluoroscopic data and the second fluoroscopic data acquired in each of a plurality of consecutive shooting cycles.
  • the processing device 140 or the image processing unit in the memory may perform processing (eg, subtraction processing) based on the fluoroscopic data to obtain images of multiple shooting periods, thereby obtaining dynamic images of multiple shooting periods.
  • the fluoroscopic data can also be understood as an image reconstructed based on the data detected by the detector.
  • the half-reconstruction method, the segmental reconstruction method and other methods may be used to reconstruct the image.
  • the processing device 140 may store the fluoroscopic data for subsequent processing to achieve dynamic fluoroscopy.
  • the dynamic fluoroscopy system 100 can firstly use the radiation source to irradiate the subject at the first energy to obtain first fluoroscopy data, and then use the radiation source to irradiate the subject at the second energy to obtain the second fluoroscopy data.
  • the dynamic fluoroscopy system 100 may also firstly use the radiation source to irradiate the subject at the second energy, and then use the radiation source to irradiate the subject at the first energy.
  • the specific shooting steps can be selected according to the actual situation.
  • Step 220 Perform the photographing on the subject in a plurality of consecutive photographing cycles.
  • step 220 may be performed by the capture module 310 .
  • the multiple consecutive shooting periods may include a first shooting period, a second shooting period... an Nth shooting period, and the next shooting period is entered after each shooting period is completed.
  • a photographing cycle may take the start time of photographing (eg, the first energy photographing) as the start time of the cycle, and the finish time of the photographing (eg, the second energy photographing) as the end time of the cycle.
  • the next shooting cycle, that is, the step of irradiating the subject with the radiation source at the first energy, can then be performed directly.
  • a certain interval time may also be included before the first energy shooting starts and/or after the second energy shooting is completed.
  • a shooting cycle may take the start time of the first energy shooting as the cycle start time; after the first energy shooting is completed, the second energy shooting can be performed after a first time period; after the second energy shooting is completed, the shooting cycle can be ended after a second time period and the next shooting cycle can begin.
  • the first time period and the second time period may be equal.
  • one shooting period is related to the frame rate. For example, for a DSA device, 50 frames of fluoroscopic images can be shot continuously in one second; here, two consecutive frames constitute one shooting period, ie, 0.04 seconds.
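The frame-rate arithmetic above can be sketched as follows (an illustrative aside, not part of the patent; the function name and the two-frames-per-period assumption follow the DSA example):

```python
def shooting_period_seconds(frames_per_second, frames_per_period=2):
    """Duration of one shooting period in seconds.

    In dual-energy fluoroscopy one period covers two consecutive frames
    (one low-energy, one high-energy), so a 50 frame/s DSA device has a
    period of 2 / 50 = 0.04 s.
    """
    return frames_per_period / frames_per_second

print(shooting_period_seconds(50))  # 0.04
```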
  • Step 230: Display a dynamic image of the subject according to the first fluoroscopic data and the second fluoroscopic data acquired in each of the multiple consecutive shooting cycles.
  • step 230 may be performed by display module 320 .
  • the display module 320 may generate an image of the subject (eg, a target image) based on the first fluoroscopic data and the second fluoroscopic data acquired in the shooting cycle.
  • Multiple target images of multiple consecutive shooting cycles can form a dynamic image of the subject.
  • the dynamic image of the subject can reflect the changes of the subject (such as lesions, tissues and/or organs, etc.) in multiple consecutive shooting cycles.
  • the categories of dynamic images (or target images) of the subject may include soft tissue images, bone images, and/or composite images including at least soft tissue and bone.
  • a soft tissue image may refer to an image showing only a portion of the soft tissue.
  • a bone image may refer to an image showing only a portion of the bone.
  • a composite image may refer to an image that displays at least a portion of soft tissue and at least a portion of bone.
  • the composite image may be composed of soft tissue images and bone images integrated according to certain weights.
  • the weight occupied by the soft tissue image and the bone image in the integrated image may be determined according to the wishes of the medical staff, for example, may be determined according to the information input by the medical staff through the input device.
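The weighted integration described above can be sketched as follows (a hedged illustration; the per-pixel linear blend and the `soft_weight` parameter are assumptions, not the patent's exact scheme):

```python
def composite_image(soft_tissue, bone, soft_weight=0.5):
    """Blend a soft tissue image and a bone image pixel by pixel.

    soft_weight is the fraction contributed by the soft tissue image;
    the bone image contributes the remainder. In the described system
    the weight would come from operator input on the input device.
    """
    if not 0.0 <= soft_weight <= 1.0:
        raise ValueError("soft_weight must lie in [0, 1]")
    bone_weight = 1.0 - soft_weight
    return [
        [soft_weight * s + bone_weight * b for s, b in zip(srow, brow)]
        for srow, brow in zip(soft_tissue, bone)
    ]

blended = composite_image([[100, 200]], [[50, 150]], soft_weight=0.5)
# each output pixel is a weighted mix of the two inputs
```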
  • the processing device 140 may utilize dual-energy subtraction technology, based on the first fluoroscopic data and the second fluoroscopic data acquired in each of the plurality of consecutive shooting cycles, to determine and display a dynamic image of the subject.
  • the intensity of the photoelectric absorption effect is positively correlated with the relative atomic mass of the irradiated material (for example, the soft tissue of the subject, the bone part, etc.) .
  • the Compton scattering effect is independent of the relative atomic mass of the irradiated subject; it is a function of the electron density of the subject's tissues and organs and occurs mainly in soft tissue.
  • Dual-energy subtraction technology can determine the target image of the subject (such as a soft tissue image, bone image, or composite image) by exploiting the different ways in which X-ray photon energy is attenuated by bone and soft tissue, as well as the difference in the photoelectric absorption effect of substances with different atomic weights. This difference in attenuation and absorption is more obvious between X-ray beams of different energies, while the influence of X-ray beam energy on the intensity of the Compton scattering effect is almost negligible.
  • the display module 320 can process the first fluoroscopic data and the second fluoroscopic data using the dual-energy subtraction technology, obtain a soft tissue image, bone image, or composite image by selectively removing or partially removing the attenuation information of bone or soft tissue, and display it on a display (eg, terminal 130).
  • the image processing unit in the memory may also process the first fluoroscopic data and the second fluoroscopic data.
  • the first fluoroscopic data may include a first projection image
  • the second fluoroscopic data may include a second projection image
  • the processing device 140 may determine the grayscale Il(x, y) of each pixel in the first projection image and the grayscale Ih(x, y) of each pixel in the second projection image. Based on the grayscale of each pixel in the first projection image, the grayscale of each pixel in the second projection image, and the subtraction parameter α, the processing device 140 may determine the grayscale of each pixel in the target image according to formula (1).
  • the subtraction parameter ⁇ can be set according to different parts of the subject.
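Formula (1) itself is not reproduced in this excerpt; a common per-pixel weighted subtraction of the same shape, with the subtraction parameter α applied to the low-energy image, can be sketched as (illustrative, not the patent's exact formula):

```python
def dual_energy_subtraction(low, high, alpha):
    """Per-pixel weighted subtraction of two projection images.

    Illustrative stand-in for formula (1), which this excerpt does not
    reproduce: S(x, y) = Ih(x, y) - alpha * Il(x, y), where alpha is
    the subtraction parameter set per body part (grayscales are often
    log-transformed before subtraction in practice).
    """
    return [
        [h - alpha * l for l, h in zip(low_row, high_row)]
        for low_row, high_row in zip(low, high)
    ]

soft = dual_energy_subtraction([[10, 20]], [[100, 200]], alpha=2.0)
```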
  • the processing device 140 may further determine the first target image by performing superimposition processing on the first projection image and the second projection image; and then determine the second target image by performing color inversion processing on the first target image.
  • the first target image and the second target image may be different types of dynamic images, for example, the first target image may be a bone image, and the second target image may be a soft tissue image.
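The color inversion step that turns the first target image into the second target image can be sketched as (assuming 8-bit grayscale; the helper name is illustrative):

```python
def invert_grayscale(image, max_value=255):
    """Color-invert a grayscale image: each pixel p becomes max_value - p.

    Applied to a superimposed first target image this yields the second
    target image described above; the 8-bit depth is an assumption.
    """
    return [[max_value - p for p in row] for row in image]

print(invert_grayscale([[0, 255, 100]]))  # [[255, 0, 155]]
```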
  • the processing device 140 may obtain an instruction input by a user (eg, a medical staff, a technician), and the instruction may reflect the user's selection of a dynamic image category to be displayed, and then determine and display the dynamic image of the subject based on the instruction.
  • the medical staff can select the type of dynamic images generated by the fluoroscopy device 110 according to actual needs. For example, when performing surgery/diagnosis on bone parts, the medical staff can choose to generate and display bone images; when performing surgery/diagnosis on soft tissue parts, the medical staff can choose to generate and display soft tissue images; when both bone and soft tissue sites need to be viewed simultaneously, the medical staff can choose to generate and display a composite image.
  • the instructions may include selection instructions, voice instructions, or text instructions, among others.
  • the voice instruction may refer to the voice information input by the medical staff through the input device, for example, "display bone image”.
  • the text instruction may refer to text information input by the medical staff through the input device, for example, "display soft tissue image” is input on the input device.
  • the selection instruction may refer to an instruction or selection item displayed on the interface of the input device for the medical staff to select, for example, selection item 1: “display bone image", selection item 2: “display soft tissue image”, selection item 3: " Display composite image".
  • When a medical device is used to diagnose or treat a patient's tissue, it is important to ensure that the medical device can be positioned at the target tissue so that the pathological characteristics of the target tissue can be accurately observed or the treatment effect can be improved.
  • a method widely used in the prior art is to pass X-rays through the human body.
  • different tissue parts have different densities, so their degree of X-ray absorption also differs.
  • tissue parts with different densities can thus be distinguished to reflect the internal tissue information of the human body.
  • the damage caused by ionizing radiation to the human body is extensive and difficult to predict.
  • DSA (Digital Subtraction Angiography) equipment
  • DSA equipment includes a C-shaped arm, a radiation source, and a detector, wherein the radiation source and the detector are installed at the two ends of the C-shaped arm; the C-shaped arm can be supported by a movable frame, and the frame can be suspended or floor-standing.
  • the rack may also be a robotic rack.
  • the imaging object is lying on the hospital bed, and the imaging object is located between the radiation source and the detector.
  • the DSA device performs fluoroscopy on the imaging object to obtain multiple frames of internal images of the human body, and then assists doctors in operations such as surgery and guide wire insertion.
  • FIG. 3 is a flowchart of an imaging positioning method of a medical imaging device provided in Embodiment 1 of the present application. This embodiment can be applied to the situation of locating a target site. The method can be performed by an imaging positioning apparatus, which can be implemented in software and/or hardware.
  • acquiring a virtual human body model corresponding to the imaging object includes: selecting a virtual human body model corresponding to the height data according to the acquired height data corresponding to the imaging object; wherein the virtual human body model includes Human Body Model and Human Internal Model.
  • the virtual human body model consists of two parts, one part is the human body shape model related to the height data, and the other part is the human body internal model corresponding to the human body shape model.
  • the virtual human body model may be a two-dimensional model or a three-dimensional model.
  • acquiring the height data corresponding to the imaging object may be receiving the height data corresponding to the imaging object input by the user, or acquiring a historical medical image of the imaging object and determining the height data based on the image distances between key points in the historical medical image, where, for example, the key points may include boundary points or joint points.
  • the height data is matched with at least one stored height interval data, and the virtual human body model corresponding to the successfully matched height interval data is used as the virtual human body model corresponding to the height data.
  • the height interval data may be 1.0-1.1m, 1.1-1.2m, 1.2-1.3m, 1.3-1.4m, 1.4-1.5m, 1.5-1.6m, 1.6-1.7m, 1.7-1.8m and 1.9-2.0m, etc.
  • each height interval data corresponds to a human body model.
  • the average height corresponding to the height interval data is used as the height of the human body shape model corresponding to the height interval data.
  • the height of the human body model corresponding to the 1.0-1.1m height interval data may be 1.05m.
  • the span of the height interval data may be smaller or larger, and accordingly, the number of reference body models may be larger or smaller.
  • the height interval data of 2.0-2.1m can also be added on the basis of the above-mentioned height interval data, or the height interval data of 1.0-1.1m can be deleted.
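The interval matching described above can be sketched as follows (the interval table mirrors the example list; the model identifiers and the lookup helper are placeholders, not the patent's implementation):

```python
# Height intervals in metres paired with a placeholder model identifier.
HEIGHT_INTERVALS = [
    ((1.0, 1.1), "model_1.05"),
    ((1.1, 1.2), "model_1.15"),
    ((1.2, 1.3), "model_1.25"),
    ((1.3, 1.4), "model_1.35"),
    ((1.4, 1.5), "model_1.45"),
    ((1.5, 1.6), "model_1.55"),
    ((1.6, 1.7), "model_1.65"),
    ((1.7, 1.8), "model_1.75"),
    ((1.9, 2.0), "model_1.95"),
]

def match_model(height):
    """Return the model whose interval contains the height, else None."""
    for (low, high), model in HEIGHT_INTERVALS:
        if low <= height < high:
            return model
    return None
```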
  • the internal model of the human body includes at least one of a blood vessel model, an organ model, a bone model and a muscle model.
  • the human body shape model and the human body internal model are stored correspondingly.
  • height interval data are 1.0-1.1m, 1.1-1.2m, 1.2-1.3m, 1.3-1.4m, 1.4-1.5m, 1.5-1.6m, 1.6-1.7m, 1.7-1.8m and 1.9-2.0m, and each height interval data corresponds to an internal model of the human body.
  • a virtual human body model matching the height data is determined based on the virtual human body models corresponding to the two height interval data adjacent to the height data.
  • the height data is 1.45m
  • the stored height interval data are 1.0-1.1m, 1.1-1.2m, 1.2-1.3m, 1.3-1.4m, 1.5-1.6m, 1.6-1.7m, 1.7-1.8m and 1.9-2.0m
  • the virtual human body model corresponding to the 1.4-1.5m height interval data is estimated based on the virtual human body model corresponding to 1.3-1.4m and the virtual human body model corresponding to 1.5-1.6m, and the estimated virtual human body model is used as the virtual human body model matching the height data.
  • the estimation method may be interpolation, averaging, or the like; the specific estimation method is not limited here.
  • the advantage of this setting is to improve the matching degree between the acquired virtual human body model and the imaging object as much as possible, thereby reducing the error between the subsequent internal image of the human body and the real tissue image.
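The averaging-based estimation of a missing interval model can be sketched as follows (representing a model as a list of numeric shape parameters is an assumption made for illustration):

```python
def estimate_model(lower_model, upper_model):
    """Estimate a missing virtual human body model from its two neighbours.

    The estimate is the element-wise average of the models for the two
    adjacent height intervals, one of the estimation methods mentioned
    above; representing a model as a parameter list is an assumption.
    """
    return [(a + b) / 2 for a, b in zip(lower_model, upper_model)]

# eg estimating the 1.4-1.5m model from the 1.3-1.4m and 1.5-1.6m models
estimated = estimate_model([2, 4], [4, 8])  # [3.0, 6.0]
```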
  • the first location information includes first model information corresponding to a virtual human body model or first device information corresponding to a medical imaging device, wherein there is an association relation between the first model information and the first device information. Specifically, based on the relation between the two, when the first model information changes, the first device information also changes; conversely, when the first device information changes, the first model information also changes.
  • the association relationship includes a position association relationship.
  • the method further includes: converting the relative position relationship between the medical imaging device and the imaging object into a position association relationship between the medical imaging device and the virtual human body model; wherein the position association The relationship is used to characterize the relationship of the position parameter between the first model information and the first device information.
  • the medical imaging device is matched with the imaging object through a preset reference point or a preset reference line, and a relative positional relationship between the two is established.
  • the preset reference point or the preset reference line may be a preset reference point or reference line on the treatment couch.
  • the preset reference line may be a reference line aligned with the apex of the head of the imaging subject on the treatment couch.
  • the first model information includes model position parameters corresponding to the virtual human body model
  • the first device information includes device position parameters corresponding to the medical imaging device.
  • the model position parameter may be the center point position and the model angle corresponding to the internal image of the human body
  • the device position parameter may be the center position of the radiation source and the angle of the axis between the radiation source and the detector relative to the treatment couch.
  • the association relationship further includes a view association relationship
  • the method further includes: acquiring a view association relationship between the first model information and the first device information; wherein the view association relationship is used to represent the first The relationship of the field of view parameter between the model information and the first device information.
  • the first model information includes a model field of view parameter corresponding to the virtual human body model
  • the first device information includes a device field of view parameter corresponding to the medical imaging device.
  • the model field of view parameter may be an image size corresponding to an internal image of the human body
  • the device field of view parameter may be at least one of magnification, source-image distance, and source-object distance.
  • a mapping list may be established in advance to describe the visual field association relationship between the first model information and the first device information.
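Such a mapping list can be sketched as a simple lookup table (the magnification-to-image-size pairs below are invented placeholders, not values from the patent):

```python
# Placeholder mapping from a device field-of-view parameter (magnification)
# to a model field-of-view parameter (internal-image size in pixels).
VIEW_ASSOCIATION = {
    1.0: (512, 512),
    1.5: (768, 768),
    2.0: (1024, 1024),
}

def image_size_for_magnification(magnification):
    """Look up the internal-image size for a device magnification."""
    try:
        return VIEW_ASSOCIATION[magnification]
    except KeyError:
        raise ValueError(
            f"no view association stored for magnification {magnification}"
        )
```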
  • the internal image of the human body is a virtual image corresponding to the virtual human body model and the first position information.
  • the virtual human body model is a three-dimensional model
  • the internal image of the human body may be a three-dimensional image, or may be a two-dimensional projection image corresponding to the angle information in the first position information.
  • the internal human body image includes a model image corresponding to at least one internal human body model in the virtual human body model, wherein the internal human body model includes at least one of a blood vessel model, an organ model, a bone model, and a muscle model kind.
  • the user may select an internal human body model corresponding to the internal human body image on the interactive interface. If the user selects the blood vessel model before imaging the blood vessel, the internal image of the human body only displays the model image of the blood vessel model. In another embodiment, the user can select the transparency of the internal model of the human body on the interactive interface.
  • the advantage of this setting is that the visual disturbance to the user caused by the models other than the target model is reduced as much as possible, thereby improving the positioning accuracy of the subsequent internal images of the human body.
  • the target shooting position may be a shooting position preset by the user according to the imaging plan before performing the imaging positioning, for example, the location of organs such as the small intestine, esophagus, and throat.
  • the specific setting content of the target shooting position is not limited here. Exemplarily, if the internal human body image includes an image corresponding to the target shooting position, it is considered that the internal human body image corresponds to the target shooting position.
  • when the first location information is the first model information, the second location information is the second model information; when the first location information is the first device information, the second location information is the second device information.
  • the internal image of the human body corresponding to the first position information input by the user is determined based on the virtual human body model, and the target imaging position corresponding to the medical imaging device is obtained by judging whether the internal image of the human body corresponds to the target shooting position, It solves the problem of excessive radiation to the human body during the imaging operation, so that the user can arbitrarily set parameters in the process of imaging positioning without causing any damage to the human body, thereby ensuring the accuracy of positioning.
  • FIG. 4 is a flowchart of an imaging positioning method of a medical imaging device provided by the present application, and the technical solution of this embodiment is a further refinement on the basis of the foregoing embodiment.
  • the acquiring of the first location information corresponding to the user operation instruction includes: when the first location information is the first model information, displaying the virtual human body model on the interactive interface, and displaying the first model information corresponding to the user operation instruction on the virtual human body model.
  • S420 Display the virtual human body model on the interactive interface, and display the first model information corresponding to the user operation instruction on the virtual human body model.
  • the first model information includes a graphical mark.
  • the shape of the graphic mark can be a square, a circle, a diamond or any shape, and the specific shape of the graphic mark is not limited here.
  • the graphical mark performs at least one operation of selecting, moving, zooming out and zooming in based on user operation instructions.
  • when a selection operation instruction input by the user is received, a graphical mark corresponding to the selection operation instruction is displayed.
  • the graphical mark can not only move in the XOY plane where the virtual human body model is located, but also can move in the XOZ plane and/or the YOZ plane where the virtual human body model is located.
  • the movement includes translation and rotation.
  • the model image at any plane and angle of the virtual human body model can be selected by moving the graphical mark.
  • an internal image of the human body corresponding to the graphic mark is determined according to the size of the graphic mark and the position of the graphic mark relative to the virtual human body model.
  • the position of the graphical mark relative to the virtual human body model includes the angle and position coordinates of the graphical mark relative to the virtual human body model.
  • FIG. 5 is a schematic diagram of an interactive interface provided by an embodiment of the present application. As shown in FIG. 5 , the image on the left contains the virtual human body model and the graphical markers (black boxes) on the virtual human body model, and the image on the right represents the internal image of the human body corresponding to the graphical markers in the virtual human body model.
• the method further includes: when the first position information is the first model information, determining the first imaging position corresponding to the medical imaging device according to the first model information and the association relationship, and controlling the medical imaging device to move to the first imaging position.
• the technical effect achieved by this embodiment is: during the imaging process of the medical imaging device, the user views the internal image of the human body by inputting at least one type of first model information, and during the viewing process, the imaging component of the medical imaging device moves as the first model information changes.
• if the internal image of the human body corresponding to the second model information corresponds to the target shooting position, the current imaging position of the imaging component of the medical imaging device is the target imaging position.
• determining the target imaging position corresponding to the medical imaging device according to the first position information includes: when the first position information is the first model information, determining the target imaging position corresponding to the medical imaging device according to the first model information and the association relationship, and controlling the medical imaging device to move to the target imaging position.
• the technical effect achieved by this embodiment is: during the imaging process of the medical imaging device, the user views the internal image of the human body by inputting at least one type of first model information, and during the viewing process, the imaging component of the medical imaging device does not necessarily move as the first model information changes.
• the medical imaging device is controlled to move from the initial position to the target imaging position based on the target device information.
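As a minimal sketch of such an association relationship, a pre-calibrated scale-and-offset mapping from model coordinates to device coordinates might look like the following; the affine form and every name here are assumptions for illustration only:

```python
def model_to_device_position(model_xyz, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Convert a point in virtual-model coordinates to a device imaging
    position using a simple pre-calibrated scale-and-offset association.
    The affine form is an illustrative assumption, not the patent's method."""
    return tuple(scale * c + o for c, o in zip(model_xyz, offset))

# The imaging component would then be commanded to move to this position.
target = model_to_device_position((10.0, 20.0, 30.0), scale=2.0,
                                  offset=(5.0, 0.0, -10.0))
# -> (25.0, 40.0, 50.0)
```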
• the method further includes: controlling an imaging component in the medical imaging device to perform an imaging operation based on a field of view parameter, wherein the field of view parameter includes at least one of a source-image distance, a source-object distance, and a magnification.
  • the source-image distance represents the distance between the radiation source and the detector
  • the source-object distance represents the distance between the radiation source and the imaging object
  • the magnification represents the magnification of the imaging image.
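These three quantities are linked by standard point-source projection geometry: the magnification equals the source-image distance divided by the source-object distance. A small sketch (function and variable names are illustrative):

```python
def geometric_magnification(source_image_distance, source_object_distance):
    """Point-source projection geometry: magnification M = SID / SOD."""
    if not 0 < source_object_distance <= source_image_distance:
        raise ValueError("require 0 < SOD <= SID")
    return source_image_distance / source_object_distance

# An object midway between the source and the detector projects at 2x.
m = geometric_magnification(1000.0, 500.0)  # -> 2.0
```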
• the technical solution of this embodiment, by displaying the virtual human body model and the graphical mark on the interactive interface, removes the need for the user to directly input the first model information: the user can intuitively observe the virtual human body model and change the positional relationship between the graphical mark and the virtual human body model, achieving the purpose of previewing the internal structure of the human body at any relative position without performing fluoroscopy on an actual human body.
  • FIG. 6 is a flowchart of an imaging positioning method of a medical imaging device provided by the present application, and the technical solution of this embodiment is a further refinement on the basis of the foregoing embodiment.
• determining, based on the first position information, the internal image of the human body corresponding to the first position information in the virtual human body model includes: when the first position information is first device information, determining first model information corresponding to the virtual human body model according to the first device information and the association relationship, and determining an internal image of the human body based on the first model information.
  • the user operation instruction may be a parameter input operation instruction, and the first device information input by the user is received.
• in one embodiment, the device parameter information of the medical imaging device input by the user is used as the first device information; in another embodiment, the system coordinates of the medical imaging device are calibrated in advance, so that the coordinates of the various components of the medical imaging device at each location can be updated in real time and are known to the system. In this way, when an operator, such as a doctor, moves the medical imaging device from a first position to a second position, the system knows the system coordinates of the components of the medical imaging device in both the first position and the second position.
  • S620 Determine first model information corresponding to the virtual human body model according to the first device information and the association relationship, and determine an internal image of the human body based on the first model information.
  • the first device information input by the user is converted into the first model information corresponding to the virtual human body model, and the internal image of the human body corresponding to the first model information is determined.
  • the association relationship includes a matching relationship between a field of view parameter in the first device information, such as a source image distance, and an image depth in the first model information.
  • the source-image distance represents the distance between the ray source and the detector.
  • FIG. 7 is a schematic diagram of an imaging scene of a virtual human body model provided by an embodiment of the present application.
• the three solid lines and one dashed line from the ray source represent the beam emitted by the ray source, i.e., the cone beam from the ray source passes through the virtual human body model.
  • FIG. 7 shows the corresponding field of view at different depth levels of the virtual human body model during the propagation of the beam. It can be seen intuitively from Fig. 7 that, on the propagation path of the beam, the field of view corresponding to different depth levels is different.
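The depth dependence shown in FIG. 7 follows from the cone-beam geometry: the beam widens linearly from the ray source until it spans the detector at the source-image distance. A sketch under that assumption (names are illustrative):

```python
def field_of_view_at_depth(detector_width, source_image_distance,
                           depth_from_source):
    """Width of the cone beam at a given depth between source and detector.
    The beam widens linearly from the (point) source, reaching
    detector_width at the detector plane (depth = SID)."""
    return detector_width * depth_from_source / source_image_distance

# Halfway to the detector, the beam covers half the detector width.
w = field_of_view_at_depth(400.0, 1000.0, 500.0)  # -> 200.0
```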
• the medical imaging equipment includes digital X-ray equipment, C-arm X-ray equipment, a mammography machine, computed tomography (CT) equipment, magnetic resonance (MR) equipment, positron emission tomography (PET) equipment, positron emission tomography and computed tomography (PET-CT) equipment, positron emission tomography and magnetic resonance imaging (PET-MR) equipment, or radiotherapy imaging (RT) equipment.
  • the imaging position in the first device information is used as the target imaging position corresponding to the medical imaging device.
  • the method further includes: displaying the virtual human body model on the interactive interface, and displaying a graphical mark on the virtual human body model based on the first device information.
  • the first model information is determined according to the first device information and the second matching information, and a graphical mark is displayed according to the first model information.
  • the content disclosed in the first embodiment can be applied to the second embodiment.
• the imaging object is photographed in one shooting cycle; during the shooting process, first fluoroscopic data obtained by the ray source irradiating the imaging object at a first energy, and second fluoroscopic data obtained by the ray source irradiating the imaging object at a second energy different from the first energy, are acquired.
  • the target imaging position of the imaging object is photographed in a plurality of consecutive photographing periods.
  • a dynamic image of the imaging object is displayed according to the first fluoroscopic data and the second fluoroscopic data acquired in each of the plurality of consecutive shooting periods.
  • Angiography is an auxiliary inspection technology that uses the principle that X-rays cannot penetrate contrast agents to observe the lesions of blood vessels, and is widely used in clinical diagnosis and treatment of various diseases.
  • angiography is a minimally invasive technique, which requires inserting a catheter into the blood vessel to be measured, so as to inject a contrast agent into the blood vessel to be measured, and then use an imaging device to image the blood vessel to be measured.
• when viewing the imaging results, in order to obtain comprehensive blood vessel information, the doctor needs to continuously zoom the image in or out, thereby continuously switching between images of different fields of view on the display screen.
  • FIG. 8 is a flowchart of an image display method provided by an embodiment of the present application. This embodiment is applicable to the case where an angiography device is used for imaging and the imaging result is displayed.
  • the method can be performed by an image display device.
  • the apparatus can be implemented in software and/or hardware, and the apparatus can be configured in an angiography apparatus. Specifically include the following steps:
• Angiography is a technology for visualizing blood vessels in X-ray sequence images. A contrast agent that X-rays cannot penetrate is injected into the blood vessels, and the part to be measured is then scanned and imaged with X-rays; the contrast agent creates a material density difference between the blood vessels of the measured part and the other tissues, so that the scanned images can accurately reflect the location and degree of vascular lesions.
  • the first field of view parameter includes, but is not limited to, at least one of a first field of view range, a first magnification, a center position of the first field of view, and a first imaging mode.
  • the first field of view can be used to describe the size of the region corresponding to the imaging object in the first blood vessel image.
  • the first visual field range may be 100*100 or 30*50.
• the first visual field range includes part of the field of view of the abdomen.
  • the first magnification may be used to describe the magnification of the first blood vessel image to the imaging object, for example, magnifying the blood vessel by 100 times on the image.
  • the first magnification may be a relative magnification relative to a standard magnification, or may be an actual magnification of the blood vessel relative to the image.
• when the first magnification is a relative magnification that is greater than the standard magnification, the first blood vessel image is an enlarged image relative to the image corresponding to the standard magnification; when the first magnification is smaller than the standard magnification, the first blood vessel image is a reduced image relative to the image corresponding to the standard magnification.
• the center position of the first field of view may be used to describe the image center position of the first blood vessel image. From the perspective of the angiography equipment, the center position of the first field of view may be used to describe the position on the detector corresponding to the center of the beam of the radiation source.
  • the first imaging mode includes a perspective mode or an exposure mode.
• the fluoroscopy (perspective) mode is an imaging mode with a small radiation dose and a low image resolution, and the exposure mode is an imaging mode with a large radiation dose and a high image resolution.
  • the user can switch the imaging mode by using the Zoom function on the angiography device.
  • displaying the first blood vessel image includes: displaying the first blood vessel image in a preset display area of the display device, or displaying the first blood vessel image on the entire display interface of the display device.
  • the parameter adjustment instruction includes a scaling adjustment ratio.
• when the zoom adjustment ratio is greater than one, the second magnification in the second field of view parameter is greater than the first magnification, or the second field of view range is smaller than the first field of view range; when the zoom adjustment ratio is less than one, the second magnification in the second field of view parameter is smaller than the first magnification, or the second field of view range is larger than the first field of view range.
  • the method further includes: if the first field of view parameter is the same as the second field of view parameter, generating prompt information.
  • the prompt information can be used to prompt the user that the blood vessel image corresponding to the parameter adjustment instruction already exists. Further, the user can also be prompted for the display position of the blood vessel image corresponding to the current field of view adjustment parameter.
  • the prompting method may be at least one of text prompting, voice prompting and prompting light.
  • a corresponding prompt light may be set for at least one display device, and if the blood vessel image currently displayed on the display device corresponds to the current parameter adjustment instruction, the prompt light flashes.
• determining the second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter includes: determining the second magnification in the second field of view parameter based on the zoom adjustment ratio and the first magnification in the first field of view parameter.
• for example, if the first magnification in the first field of view parameter is 2 times relative to the preset standard magnification and the zoom adjustment ratio is 3 times, the second magnification is 6 times.
• determining the second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter includes: determining the second field of view range in the second field of view parameter based on the zoom adjustment ratio and the first field of view range in the first field of view parameter.
• for example, if the first field of view range is 10*10 and the zoom adjustment ratio is 2 times, the second field of view range is 5*5.
• the second magnification and the second field of view range are calculated based on the first magnification, the first field of view range, and the zoom adjustment ratio in the first field of view parameters.
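The two calculations above can be sketched together; the convention that the magnification scales up while the field of view range shrinks by the same zoom ratio is taken from the worked examples, and the names are illustrative:

```python
def second_view_parameters(first_magnification, first_fov, zoom_ratio):
    """Derive the second field-of-view parameters described in the text:
    the magnification scales up by the zoom ratio, while the field-of-view
    range shrinks by the same factor."""
    second_magnification = first_magnification * zoom_ratio
    second_fov = tuple(dim / zoom_ratio for dim in first_fov)
    return second_magnification, second_fov

# The text's examples: 2x magnification with a 3x zoom gives 6x, and a
# 10*10 field of view with a 2x zoom gives 5*5.
mag, fov = second_view_parameters(2.0, (10.0, 10.0), 2.0)
# mag -> 4.0, fov -> (5.0, 5.0)
```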
  • the center position of the second field of view may be the center position of the first field of view in the first field of view parameter by default
  • the second imaging mode may be the first imaging mode in the first field of view parameter by default.
  • the parameter adjustment instruction further includes at least one of a field of view adjustment range, a center position of the field of view adjustment, and an imaging adjustment mode.
• the user can customize the second field of view range, the center position of the second field of view, and the second imaging mode in the second field of view parameters; the field of view adjustment range, the center position of the field of view adjustment, and the imaging adjustment mode in the parameter adjustment instruction are respectively used as the second field of view range, the center position of the second field of view, and the second imaging mode.
  • the parameter adjustment instruction may be generated based on the parameter value input by the user, or generated based on the selection operation input by the user based on the displayed first blood vessel image.
  • the user may realize the input of the zoom adjustment ratio by using the Zoom function on the angiography device.
  • the user may also input an image selection frame based on the first blood vessel image, and generate a field of view adjustment range according to the size of the inputted image selection frame.
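A hypothetical sketch of deriving a field of view center and range from such a user-drawn selection frame (the coordinate format and names are illustrative, not from the application):

```python
def view_from_selection(box_top_left, box_bottom_right):
    """Derive a field-of-view centre and range from a user-drawn selection
    box on the displayed image. Pixel-coordinate corners are an
    illustrative assumption."""
    (x0, y0), (x1, y1) = box_top_left, box_bottom_right
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)   # centre of the box
    extent = (abs(x1 - x0), abs(y1 - y0))         # box width and height
    return center, extent

center, extent = view_from_selection((10, 20), (110, 120))
# center -> (60.0, 70.0), extent -> (100, 100)
```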
  • determining the second blood vessel image based on the second visual field parameter includes: acquiring a second blood vessel image acquired by an angiography device based on the second visual field parameter. Specifically, the angiography device is controlled to acquire the second blood vessel image based on the second visual field parameter.
  • the embodiment of the present application uses the synchronous display of two blood vessel images for exemplary explanation.
• more blood vessel images corresponding to different field of view parameters can be obtained according to the image display method provided by the embodiment of the present application, and the blood vessel images are displayed synchronously.
• the number of blood vessel images displayed simultaneously is not limited here.
• the second blood vessel image is determined based on the received parameter adjustment instruction while the first blood vessel image is displayed, and the second blood vessel image is displayed synchronously with the first blood vessel image. This solves the problem of repeatedly switching between and displaying scanned images of different fields of view, reduces the number of repeated imaging operations and the radiation dose, extends the service life of the angiography equipment, improves the diagnostic efficiency of angiography, and reduces diagnostic errors caused by repeated switching.
  • FIG. 9 is a flowchart of an image display method provided by an embodiment of the present application. This embodiment is a further refinement on the basis of the above-mentioned embodiment.
• displaying the second blood vessel image synchronously with the first blood vessel image includes: when the first visual field parameter corresponding to the first blood vessel image changes, acquiring an updated first blood vessel image collected by the angiography device based on the changed first visual field parameter; determining an updated second visual field parameter based on the updated first visual field parameter and the parameter adjustment instruction; and synchronously displaying the updated second blood vessel image determined based on the updated second visual field parameter and the updated first blood vessel image.
  • S910 Acquire a first blood vessel image of the imaging object collected by the angiography device based on the first visual field parameter, and display the first blood vessel image.
• step S910 is specifically: using an angiography device to photograph the imaging object within one shooting cycle, and obtaining first fluoroscopic data of the imaging object irradiated by the angiography device at a first energy based on the first field of view parameter, and second fluoroscopic data of the imaging object irradiated at a second energy different from the first energy.
  • a first blood vessel image of the imaging object is determined according to the first fluoroscopic data and the second fluoroscopic data, and the first blood vessel image is displayed.
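One common way to combine two-energy data into a vessel-enhancing image is a weighted log-subtraction; the application does not specify its combination rule, so the formulation and default weight below are assumptions:

```python
import math

def dual_energy_subtract(high_energy, low_energy, weight=0.5):
    """Toy per-pixel weighted log-subtraction of two fluoroscopic samples,
    the kind of combination used in dual-energy imaging. The log-domain
    formulation and the 0.5 weight are assumptions, not from the patent."""
    return [math.log(h) - weight * math.log(l)
            for h, l in zip(high_energy, low_energy)]

# Identical inputs cancel to a flat (zero) result.
flat = dual_energy_subtract([1.0, 1.0], [1.0, 1.0])  # -> [0.0, 0.0]
```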
  • step S910 is specifically: acquiring a virtual human body model corresponding to the imaging object.
  • the virtual human body model is displayed on the interactive interface, and the first model information corresponding to the user operation instruction is displayed on the virtual human body model.
• an internal human body image corresponding to the first model information in the virtual human body model is determined, and the internal human body image is displayed. If the internal image of the human body corresponding to the first model information corresponds to the target first blood vessel position, the first blood vessel position corresponding to the medical imaging device is determined according to the first model information.
• otherwise, second model information different from the first model information continues to be acquired, and the first blood vessel position corresponding to the medical imaging device is determined according to the internal image of the human body corresponding to the second model information. After the first blood vessel position is determined, a first blood vessel image of the imaging object acquired by the angiography device based on the first field of view parameter is acquired, and the first blood vessel image is displayed.
  • S920 Determine a second visual field parameter based on the received parameter adjustment instruction and the first visual field parameter, and determine a second blood vessel image based on the second visual field parameter.
• step S920 is specifically: determining the second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter, and using the angiography device to photograph the imaging object within one shooting cycle; during the shooting process, acquiring third fluoroscopic data of the imaging object irradiated by the angiography device at a first energy based on the second field of view parameter, and fourth fluoroscopic data of the imaging object irradiated by the ray source at a second energy different from the first energy. After that, a second blood vessel image of the imaging object is determined according to the third fluoroscopic data and the fourth fluoroscopic data, and the second blood vessel image is displayed.
  • step S920 is specifically: acquiring a virtual human body model corresponding to the imaging object.
  • the virtual human body model is displayed on the interactive interface, and the first model information corresponding to the user operation instruction is displayed on the virtual human body model.
• an internal human body image corresponding to the first model information in the virtual human body model is determined, and the internal human body image is displayed. If the internal image of the human body corresponding to the first model information corresponds to the target second blood vessel position, the second blood vessel position corresponding to the medical imaging device is determined according to the first model information.
• if the internal image of the human body corresponding to the first model information does not correspond to the target second blood vessel position, second model information different from the first model information continues to be acquired, and the second blood vessel position corresponding to the medical imaging device is determined according to the internal image of the human body corresponding to the second model information. After the second blood vessel position is determined, a second blood vessel image of the imaging object acquired by the angiography device based on the second field of view parameter is acquired, and the second blood vessel image is displayed.
• when the imaging object includes a blood vessel and a catheter inserted into the blood vessel, determining the second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter further includes: when the zoom adjustment ratio is greater than one, acquiring the catheter tip image corresponding to the catheter in the first blood vessel image, and using the visual field center position and visual field range corresponding to the catheter tip image as the second visual field center position and the second visual field range in the second visual field parameter, respectively.
  • Angiography is a technique of injecting contrast agents into target blood vessels through catheter intervention.
  • a catheter is inserted into a blood vessel, so that after the contrast agent is injected into the catheter, the contrast agent is delivered to a designated location of the blood vessel through the catheter.
  • real-time imaging of the blood vessel is required in order to observe the position of the catheter in the blood vessel.
  • the catheter image is included in the first blood vessel image.
  • the catheter image includes a catheter tip image, that is, an image obtained after imaging the catheter tip.
• the catheter tip image corresponding to the catheter in the first blood vessel image is acquired; specifically, the first blood vessel image is divided based on a preset division rule to obtain the catheter tip image.
  • the preset division rule includes a preset image size and preset image shapes, etc.
  • the preset image shape may be a square, a rectangle, a circle or an irregular shape.
  • the preset image size may be determined according to the preset image shape. Exemplarily, when the preset image shape is a square, the preset image size may be 10mm*10mm, and when the preset image shape is a circle, the preset image size The size can be a radius of 5mm.
• the preset image size can also be the image area of the catheter tip image.
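A sketch of computing the preset-size crop around a detected catheter tip, clamped to the image bounds; the coordinate conventions and the 10*10 default (borrowed from the size example above) are assumptions:

```python
def tip_crop_bounds(tip_xy, image_shape, crop_size=(10, 10)):
    """Bounding box (x0, y0, x1, y1) of a preset-size crop centred on the
    detected catheter tip, clamped so it stays inside the image.
    image_shape is (height, width); tip_xy is (x, y) in pixels."""
    (tx, ty), (h, w) = tip_xy, image_shape
    half_x, half_y = crop_size[0] // 2, crop_size[1] // 2
    # Centre the box on the tip, then clamp it to the image borders.
    x0 = max(0, min(tx - half_x, w - crop_size[0]))
    y0 = max(0, min(ty - half_y, h - crop_size[1]))
    return (x0, y0, x0 + crop_size[0], y0 + crop_size[1])

box = tip_crop_bounds((50, 60), (100, 100))  # -> (45, 55, 55, 65)
```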
  • the advantage of this setting is that, by automatically identifying the catheter tip image in the first blood vessel image, the step of manually selecting the image of the region of interest is avoided.
• when the first blood vessel image changes, real-time tracking and identification of the catheter tip image in the changed first blood vessel image can also be realized, thereby greatly improving the diagnostic efficiency of angiography in the subsequent angiography process.
  • determining the second blood vessel image based on the second visual field parameter includes: acquiring the second blood vessel image collected by the angiography device based on the second visual field parameter; or, when the scaling adjustment ratio is greater than one , determining the reference blood vessel image in the first blood vessel image based on the second visual field center position and the second visual field range in the second visual field parameter, and determining the second blood vessel image based on the second magnification in the second visual field parameter and the reference blood vessel image .
  • the angiography device can be controlled to acquire the second blood vessel image based on the second visual field parameter.
• when the scaling adjustment ratio is less than one, it means that a zoom-out operation is performed on the first blood vessel image.
  • the second field of view is usually larger than the first field of view, that is, the first blood vessel image does not contain all the images required by the second blood vessel image. In this case, it is necessary to re-control the angiography device to acquire the second blood vessel image based on the second visual field parameter.
• when the scaling adjustment ratio is greater than one, it means that a zoom-in operation is performed on the first blood vessel image; at this time, the first blood vessel image usually contains all the image information required by the second blood vessel image. Of course, there may also be cases where the first blood vessel image does not contain all the image information required by the second blood vessel image.
  • a method of controlling the angiography device to acquire the second blood vessel image based on the second visual field parameter may also be used.
• taking the case where the first blood vessel image includes all the image information required by the second blood vessel image as an example, the reference blood vessel image includes all the image information required by the second blood vessel image, but the magnification corresponding to the reference blood vessel image is the first magnification; therefore, it is also necessary to perform a magnification operation on the reference blood vessel image based on the second magnification in the second field of view parameter to obtain the second blood vessel image.
  • the algorithm in the process of performing the enlargement operation includes, but is not limited to, at least one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a higher-order interpolation algorithm.
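The simplest of the listed algorithms, nearest-neighbour interpolation, can be sketched for an integer zoom factor as follows (a toy implementation on nested lists, not the application's method):

```python
def nearest_neighbor_zoom(image, factor):
    """Enlarge a 2D grid of pixels by an integer factor using
    nearest-neighbour interpolation: each pixel is replicated
    factor times horizontally and vertically."""
    out = []
    for row in image:
        expanded = [px for px in row for _ in range(factor)]  # widen row
        for _ in range(factor):                               # repeat row
            out.append(list(expanded))
    return out

zoomed = nearest_neighbor_zoom([[1, 2], [3, 4]], 2)
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```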
  • the advantage of this setting is that by directly intercepting the reference blood vessel image from the first blood vessel image, and directly obtaining the second blood vessel image based on the reference blood vessel image and the second field of view parameters, the number of repeated imaging and the amount of radiation are further reduced, and the blood vessel is improved.
  • the service life of imaging equipment is that by directly intercepting the reference blood vessel image from the first blood vessel image, and directly obtaining the second blood vessel image based on the reference blood vessel image and the second field of view parameters, the number of repeated imaging and the amount of radiation are further reduced, and the blood vessel is improved. The service life of imaging equipment.
  • displaying the second blood vessel image and the first blood vessel image synchronously includes: simultaneously displaying the second blood vessel image and the first blood vessel image.
  • S960 synchronously displaying the updated second blood vessel image determined based on the updated second visual field parameter and the updated first blood vessel image.
• displaying the second blood vessel image and the first blood vessel image synchronously includes: displaying the second blood vessel image in a display area different from that of the first blood vessel image on the display device corresponding to the first blood vessel image; or, displaying the second blood vessel image on a display device different from that of the first blood vessel image.
  • the display interface of the same display device includes at least two areas for displaying images, wherein one display area is used to display the first display image, and the other display area is used to display the second display image.
  • the angiography device includes at least two display devices, one of which is used to display the first display image and the other display device is used to display the second display image.
• the obtained updated first blood vessel image and updated second blood vessel image can also be displayed in different display areas of one display device, or on different display devices.
• the updated second visual field parameter is determined based on the changed first visual field parameter and the parameter adjustment instruction, and the updated second blood vessel image determined based on the updated second visual field parameter is displayed synchronously with the updated first blood vessel image.
  • Breast cancer is an important disease that seriously threatens women's health worldwide.
• Mammography is currently recognized as the preferred modality for breast cancer screening.
• digital breast tomography, also known as digital breast tomosynthesis (DBT), is a three-dimensional imaging technology that obtains projection data of the breast from different angles; these individual projections are then reconstructed into a series of high-resolution 3D tomographic images of the breast. These tomographic images are displayed individually or dynamically in continuous playback. Each tomographic image shows the structure of one slice of the breast, and the three-dimensional tomographic image of the entire breast represents the reconstructed breast.
• some projection methods, such as maximum intensity projection and average projection, can be applied to the 3D tomographic image to obtain a 2D image similar to a 2D digital mammography image.
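Maximum intensity projection (the "maximum density projection" mentioned above) amounts to a per-pixel maximum across the tomographic slices; a toy sketch on nested lists:

```python
def maximum_intensity_projection(volume):
    """Collapse a stack of tomographic slices (a list of 2D grids, all the
    same size) into one 2D image by taking the per-pixel maximum across
    slices (maximum intensity projection)."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(slice_[r][c] for slice_ in volume)
             for c in range(cols)]
            for r in range(rows)]

# Two 2x2 slices collapse into one 2x2 projection image.
mip = maximum_intensity_projection([[[1, 5], [2, 0]], [[4, 3], [1, 6]]])
# -> [[4, 5], [2, 6]]
```

An average projection would use the per-pixel mean instead of the maximum.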
• however, a better image can only be obtained in the direction parallel to the detector, while information in other directions cannot be obtained, which is not conducive to the user observing the overall distribution and location of the lesions.
  • FIG. 10 is a schematic flowchart of a method for generating a volume reconstruction image provided by an embodiment of the present application to solve the above problem.
  • This embodiment can be applied to the case of determining a target volume reconstruction image in a desired reconstruction direction.
• the advantage is that the target volume reconstruction image can be obtained both in the desired reconstruction direction and in the direction perpendicular to the desired reconstruction direction; that is, the image information of the target volume reconstruction image is rich and comprehensive, which helps users quickly locate the region of interest on the target volume reconstruction image.
• the method can be performed by a volume reconstruction image generation device, wherein the device can be implemented by software and/or hardware and is generally integrated in a terminal or control device. Specifically, as shown in FIG. 10, the method may include the following steps:
  • The tube can be controlled by the image acquisition device to emit X-rays toward the scanned object at different angles, these different angles being the scanning angles.
  • the projection data may be data received by a detector of an image acquisition device.
  • The image acquisition device in this embodiment may be a digital breast tomosynthesis (Digital Breast Tomosynthesis, DBT for short) device. The tube of the DBT device emits X-rays toward the scanned object; the detector receives the X-rays transmitted through the scanned object and converts them into digitized projection data, and the projection data at each scanning angle are sent to the control device.
  • The scanning angle of the tube of the digital breast tomosynthesis apparatus is within a range of 15 degrees.
  • In some embodiments, a first field-of-view parameter is determined for each scanning angle, and a second field-of-view parameter for each scanning angle is determined according to the parameter adjustment instruction for that scanning angle and its first field-of-view parameter. Then, a first projection image is collected at each scanning angle based on the first field-of-view parameter, and a second projection image is collected based on the second field-of-view parameter; the projection data at each scanning angle are determined based on the first projection image and the second projection image at that angle.
  • the first projection image and the second projection image of each scanning angle may be weighted to determine the projection data under the scanning angle.
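The weighted combination of the two projection images can be sketched as follows; the linear weighting scheme, the default weight of 0.5, and the NumPy-based implementation are illustrative assumptions rather than the specific weights prescribed by this embodiment.

```python
import numpy as np

def fuse_projections(img1, img2, w1=0.5):
    """Weighted fusion of the first and second projection images acquired
    with different field-of-view parameters at the same scan angle.

    w1 is the (hypothetical) weight of the first image; the second image
    receives the complementary weight (1 - w1).
    """
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    return w1 * img1 + (1.0 - w1) * img2
```

With `w1=0.25`, a pixel pair (2, 4) fuses to 0.25·2 + 0.75·4 = 3.5.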
  • the desired reconstruction direction may be a vector in any direction in the initial volume coordinate system, and may vary with different scanning angles.
  • the volume reconstruction technology is used to reconstruct the projection data, and a three-dimensional structure model of the projection data is constructed. Volume reconstruction is performed on the projection data based on the desired reconstruction direction and the initial volume coordinate system, which facilitates the intuitive display of the projection information invisible to the naked eye in the form of tomographic images.
  • the volume reconstruction technology can be understood as a volume rendering technology.
  • The specific method is to assign a light-blocking degree to each voxel in the three-dimensional structure model and to consider the transmission and reflection of light by each voxel: the transmission of light depends on the opacity of the voxel; the intensity of the reflected light depends on the materiality of the voxel, with greater materiality producing stronger reflected light; and the direction of reflection depends on the angle between the surface on which the voxel lies and the incident light.
  • The steps of volume rendering are divided into four stages: projection, hidden-surface removal, shading, and compositing.
  • Volume rendering algorithms can be divided into spatial-domain methods and transform-domain methods according to the data domain they process: spatial-domain methods process and display the original spatial data directly, whereas transform-domain methods first transform the volume data into a transform domain and then perform processing and display.
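A minimal sketch of spatial-domain volume rendering along a single ray, using front-to-back alpha compositing of the per-voxel opacities described above; the scalar "shade" samples, the opacity values, and the early-termination threshold are hypothetical illustration choices, not values from this application.

```python
def composite_ray(colors, alphas):
    """Front-to-back alpha compositing of scalar samples along one ray.

    colors/alphas: per-sample shade and opacity (0..1), ordered front to back.
    Returns the accumulated shade and accumulated opacity of the ray.
    """
    acc_color, acc_alpha = 0.0, 0.0
    for color, alpha in zip(colors, alphas):
        acc_color += (1.0 - acc_alpha) * alpha * color
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha >= 0.999:  # early ray termination: ray is effectively opaque
            break
    return acc_color, acc_alpha
```

For samples with shades (1.0, 0.5) and opacities (0.5, 1.0), the ray accumulates 0.5 from the front sample and 0.25 from the fully opaque back sample.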
  • the method for determining the initial volume coordinate system includes: performing volume reconstruction on the projection data to obtain an initial volume reconstructed image, and constructing the initial volume coordinate system based on the initial volume reconstructed image.
  • The first coordinate origin of the initial volume coordinate system is any point on the edge to which the first longitudinal axis of the reconstructed volume corresponding to the initial volume reconstructed image belongs; the first horizontal axis lies in the plane to which the first longitudinal axis belongs; and the first vertical axis takes the first coordinate origin as its starting point and is perpendicular to both the first horizontal axis and the first longitudinal axis.
  • The first coordinate origin may be the vertex, midpoint, or one-third point of the edge to which the first longitudinal axis of the reconstructed volume (volume) belongs. In this embodiment, the midpoint is used as the first coordinate origin.
  • Fig. 11 is a schematic diagram of the definition of the initial volume coordinate system.
  • the reconstructed volume in Fig. 11 can be the above-mentioned three-dimensional structural model.
  • The volume reconstruction technique is used to perform volume reconstruction on the projection data, obtaining an initial volume reconstructed image and the reconstructed volume, and the initial volume coordinate system is constructed based on the initial volume reconstructed image. Specifically, the first coordinate origin O of the initial volume coordinate system in FIG. 11 is the midpoint of the edge (VolumeY) to which the first longitudinal axis Y belongs, and the initial volume coordinate system is constructed based on the determined first coordinate origin O, the first horizontal axis X, the first longitudinal axis Y, and the first vertical axis Z on the bottom surface, which is parallel to the side length (VolumeZ).
  • In some embodiments, a method for constructing the target volume coordinate system includes: determining the reconstructed volume to which the initial volume coordinate system belongs; in the reconstructed volume, determining, based on the desired reconstruction direction, a first current plane perpendicular to the desired reconstruction direction; constructing a second current plane based on the first longitudinal axis and the first vertical axis; taking the midpoint of the intersection line of the second current plane and the first current plane as the second coordinate origin of the target volume coordinate system, and the desired reconstruction direction as the second vertical axis of the target volume coordinate system; in the first current plane, taking the edge to which the second coordinate origin belongs as the second longitudinal axis of the target volume coordinate system; and taking the vector that starts at the second coordinate origin and is perpendicular to both the second longitudinal axis and the second vertical axis as the second horizontal axis.
  • the first current plane is a plane where any tomographic image of the three-dimensional structural model is located.
  • In some embodiments, the method for determining the reconstructed volume to which the initial volume coordinate system belongs is: distances are intercepted on the positive coordinate axis and the negative coordinate axis of the first longitudinal axis Y respectively, distances are respectively intercepted on the positive coordinate axes of the first horizontal axis X and the first vertical axis Z, a cube is constructed according to the intercepted distances, and the constructed cube is used as the reconstructed volume.
  • the determination process of the target volume coordinate system is explained with reference to FIG. 11 and FIG. 12 .
  • Obtain the desired reconstruction direction; determine the first current plane perpendicular to the desired reconstruction direction; construct a second current plane based on the first longitudinal axis Y and the first vertical axis Z; and take the midpoint of the intersection line of the second current plane and the first current plane as the second coordinate origin O' of the target volume coordinate system, with the desired reconstruction direction as the second vertical axis Z' in FIG. 12. In the first current plane, the edge to which the second coordinate origin O' belongs is taken as the second longitudinal axis Y', and the vector that starts at the second coordinate origin and is perpendicular to both the second longitudinal axis Y' and the second vertical axis Z' is taken as the second horizontal axis X'.
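The idea of building a new orthogonal coordinate frame around a desired reconstruction direction can be illustrated with a generic orthonormal-basis sketch; the choice of reference axis and the use of cross products here are simplifying assumptions standing in for the patent's plane-intersection construction.

```python
import numpy as np

def target_axes(desired_dir):
    """Build an orthonormal basis (X', Y', Z') with Z' along the desired
    reconstruction direction. A sketch: the reference axis used to seed the
    cross products is an assumed convention, not the patent's exact method."""
    z = np.array(desired_dir, dtype=float)
    z /= np.linalg.norm(z)
    ref = np.array([0.0, 1.0, 0.0])          # seed along the first longitudinal axis Y
    if abs(np.dot(ref, z)) > 0.99:           # avoid a degenerate cross product
        ref = np.array([1.0, 0.0, 0.0])
    x = np.cross(ref, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                       # completes the right-handed frame
    return x, y, z
```

For a desired direction along the original Z axis, this reduces to the identity basis.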
  • the projection data may be reconstructed by means of an iterative reconstruction algorithm, a filtered back-projection reconstruction algorithm, a back-projection filtered reconstruction algorithm, etc., to generate a target volume reconstruction image.
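As a toy instance of the back-projection family of algorithms mentioned above, the following performs simple unfiltered parallel-beam backprojection with NumPy; the geometry (centered detector, nearest-neighbor sampling, degree-valued angles) is an illustrative assumption, and a practical system would add the filtering or iterative updates the text names.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered parallel-beam backprojection onto a size x size grid.

    sinogram: (n_views, n_det) array, detector centered at (n_det - 1) / 2.
    """
    sinogram = np.asarray(sinogram, dtype=float)
    image = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    x, y = xs - center, ys - center
    det_center = (sinogram.shape[1] - 1) / 2.0
    for view, angle in zip(sinogram, np.deg2rad(angles_deg)):
        t = x * np.cos(angle) + y * np.sin(angle)   # detector coordinate hit by each pixel
        idx = np.rint(t + det_center).astype(int)
        valid = (idx >= 0) & (idx < sinogram.shape[1])
        image[valid] += view[idx[valid]]
    return image / len(angles_deg)
```

A sinogram whose every view has a spike at the central detector element backprojects to an image whose maximum sits at the grid center.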
  • the target volume reconstructed image may be perpendicular to the desired reconstruction direction, and may also be at other angles to the desired reconstruction direction.
  • After the control device acquires the projection data at different scanning angles, it can determine the desired reconstruction direction corresponding to each scanning angle and reconstruct the projection data according to each desired reconstruction direction, obtaining a target volume reconstructed image that is perpendicular to the desired reconstruction direction or at another angle to the desired reconstruction direction.
  • Compared with the prior art, in which a volume reconstructed image of good resolution can only be obtained in the direction parallel to the detector and information in other directions cannot be obtained, volume reconstructed images of good resolution are obtained in multiple directions, which helps users analyze the volume reconstructed images in multiple directions and perform target positioning.
  • In this embodiment, projection data at each scanning angle are acquired, a target volume coordinate system is constructed based on the initial volume coordinate system and the desired reconstruction direction, and, in the target volume coordinate system, the projection data are reconstructed according to the desired reconstruction direction to generate the target volume reconstructed image.
  • FIG. 13 is a schematic flowchart of a method for generating a volume reconstructed image according to an embodiment of the present application.
  • Reconstructing the projection data according to the desired reconstruction direction to generate the target volume reconstructed image includes: determining the pixel range in the target volume coordinate system based on the angle difference between the initial volume coordinate system and the target volume coordinate system and on the pixel range of the initial volume reconstructed image in the initial volume coordinate system; and, within the pixel range in the target volume coordinate system, reconstructing the projection data according to the desired reconstruction direction to generate the target volume reconstructed image.
  • the method may include the following steps:
  • The initial volume coordinate system and the target volume coordinate system are determined by the method described in the previous embodiment, and the control device determines the coordinate information of the first coordinate origin O and the second coordinate origin O', the angle difference between the first horizontal axis X and the second horizontal axis X', the angle difference between the first longitudinal axis Y and the second longitudinal axis Y', and the angle difference between the first vertical axis Z and the second vertical axis Z', that is, the angle difference between the initial volume coordinate system and the target volume coordinate system.
  • The control device determines the pixel range of the initial volume reconstructed image in each direction of the initial volume coordinate system, and then, combining the angle difference between the initial volume coordinate system and the target volume coordinate system, determines the pixel range of the initial volume reconstructed image in the target volume coordinate system.
  • For example, the side lengths of the reconstructed volume shown in FIG. 11 and FIG. 12 are VolumeX, VolumeY and VolumeZ respectively, and the first coordinate origin O of the initial volume coordinate system is the midpoint of the edge to which the first longitudinal axis Y of the reconstructed volume (volume) belongs. The pixel range of the initial volume reconstructed image in the direction of the first horizontal axis X of the initial volume coordinate system is PixelSizeX, the pixel range in the direction of the first longitudinal axis Y is PixelSizeY, and the pixel range in the direction of the first vertical axis Z is PixelSizeZ.
  • In some embodiments, the first horizontal axis is defined as: ReconCoordinateX, [0:VolumeX]*PixelSizeX; the first longitudinal axis is defined as: ReconCoordinateY, [0:VolumeY]*PixelSizeY; and the first vertical axis is defined as: ReconCoordinateZ, [0:VolumeZ]*PixelSizeZ; wherein ReconCoordinateX represents the X axis of the initial volume coordinate system, ReconCoordinateY represents the Y axis of the initial volume reconstruction coordinate system, and ReconCoordinateZ represents the Z axis of the initial volume reconstruction coordinate system.
  • the second coordinate origin O' is the midpoint of the intersection of the second current plane and the first current plane
  • In some embodiments, the second horizontal axis X' is defined as: ReconCoordinateXNew, [0:VolumeX/cos β]*PixelSizeX; the second longitudinal axis Y' is defined as: ReconCoordinateYNew; and the desired reconstruction direction Z' (i.e., the second vertical axis) is defined as:
  • ReconCoordinateZNew(z) = ReconCoordinateYNew(y)·sin α + ReconCoordinateXNew(x)·sin β + ReconCoordinateZ(z)/cos γ
  • where α represents the angle between the second longitudinal axis Y' and the first longitudinal axis Y, β represents the angle between the second horizontal axis X' and the first horizontal axis X, and γ represents the angle between the second vertical axis Z' and the first vertical axis Z; ReconCoordinateXNew represents the X axis of the target volume coordinate system (i.e., the second horizontal axis X'), ReconCoordinateYNew represents the Y axis of the target volume coordinate system (i.e., the second longitudinal axis Y'), and ReconCoordinateZNew(z) represents the Z axis of the target volume coordinate system (i.e., the second vertical axis Z').
  • [0:VolumeX/cos β]*PixelSizeX represents the pixel range on the X axis of the target volume coordinate system (i.e., the second horizontal axis X'); the pixel range on the Y axis of the target volume coordinate system (i.e., the second longitudinal axis Y') is expressed analogously; and the pixel range on the Z axis of the target volume coordinate system (i.e., the second vertical axis Z') is 0.
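The pixel extent along a rotated axis, following the pattern [0:VolumeX/cos β]*PixelSizeX used above, can be computed as in this sketch; the function name and the degree-valued angle input are assumptions for illustration.

```python
import math

def target_pixel_range(volume_len, pixel_size, angle_deg):
    """Pixel extent along a rotated axis of the target volume coordinate
    system: [0 : volume_len / cos(angle)] * pixel_size.

    volume_len: side length of the reconstructed volume along this axis
    pixel_size: pixel size along this axis in the initial coordinate system
    angle_deg:  angle between the rotated axis and its original counterpart
    """
    return volume_len / math.cos(math.radians(angle_deg)) * pixel_size
```

For example, a 100-voxel side with 0.1 mm pixels rotated by 60 degrees spans 100 / cos 60° · 0.1 = 20.0 mm.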
  • The pixel range included in the target volume coordinate system is determined through the foregoing steps, and the target volume reconstructed image can be generated by reconstructing the projection data within this pixel range using an iterative reconstruction algorithm, a filtered back-projection reconstruction algorithm, or a back-projection filtered reconstruction algorithm.
  • the target volume reconstructed image may be perpendicular to the desired reconstruction direction, and may also be at other angles to the desired reconstruction direction.
  • In some embodiments, before reconstructing the projection data, the method further includes preprocessing the projection data, where the preprocessing includes at least one of image segmentation, gray-value transformation, and window width and window level transformation.
  • Image segmentation may use threshold-based segmentation, region growing, and similar methods to screen the projection data, reducing the amount of computation for volume reconstruction; gray-value transformation may use image inversion, logarithmic transformation, gamma transformation, and similar methods to improve the contrast of the projection data, which helps improve the contrast of the volume reconstructed image and facilitates the user's analysis of the reconstructed image; window width and window level transformation may increase the window width, reduce the window width, or change the window level center point to remove noise from the projection data, which helps improve the reconstruction efficiency of the projection data.
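A minimal sketch of a window width/window level transformation as described above; the linear mapping to an 8-bit display range is a common convention assumed here, not a value specified by this application.

```python
import numpy as np

def window_transform(img, center, width, out_max=255.0):
    """Map raw values to a display range via window center and width.

    Values below (center - width/2) clamp to 0; values above
    (center + width/2) clamp to out_max; in between the mapping is linear.
    """
    img = np.asarray(img, dtype=float)
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (img - lo) / (hi - lo) * out_max
    return np.clip(out, 0.0, out_max)
```

With center 100 and width 200, raw values (-50, 100, 300) map to (0, 127.5, 255): narrowing the width steepens the mapping and suppresses out-of-window noise.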
  • The control device can also obtain an image rendering instruction for the target volume reconstructed image, and the target volume reconstructed image is rendered and displayed according to the instruction.
  • For example, the pixel values of the pixels in the region of interest determined by the user are increased to highlight that region, which is convenient for the user to analyze the image.
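Increasing the pixel values inside a user-selected region of interest could look like the following sketch; the boolean-mask representation of the region and the gain factor are hypothetical illustration choices.

```python
import numpy as np

def highlight_roi(img, mask, gain=1.2):
    """Brighten pixels inside a user-selected region of interest.

    mask: boolean array of the same shape as img; True marks the ROI.
    gain: multiplicative boost applied inside the ROI (hypothetical value).
    """
    out = np.asarray(img, dtype=float).copy()
    out[np.asarray(mask, dtype=bool)] *= gain
    return out
```

Pixels outside the mask are left untouched, so the rest of the reconstructed image keeps its original contrast.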
  • In this embodiment, the pixel range in the target volume coordinate system is determined based on the angle difference between the initial volume coordinate system and the target volume coordinate system and on the pixel range of the initial volume reconstructed image in the initial volume coordinate system; within that pixel range, the projection data are reconstructed according to the desired reconstruction direction to generate the target volume reconstructed image. In this way, the pixel range in the target volume coordinate system can be accurately determined, which further facilitates the generation of volume reconstructed images with good resolution in all directions.
  • the dynamic perspective system 1400 may include a photographing module 1410 and a display module 1420 .
  • the dynamic fluoroscopy system 1400 may be implemented by the dynamic fluoroscopy system 100 (eg, processing device 140 ) shown in FIG. 1 .
  • The photographing module 1410 can be used to photograph the subject in one photographing cycle, acquire first fluoroscopy data of the radiation source irradiating the subject at a first energy during the photographing process, and acquire second fluoroscopy data of the radiation source irradiating the subject at a second energy different from the first energy.
  • The photographing module 1410 may also be configured to photograph the subject in a plurality of consecutive photographing cycles.
  • the display module 1420 may be configured to display the dynamic image of the subject according to the first perspective data and the second perspective data acquired in each of the plurality of continuous shooting periods.
  • a dynamic perspective apparatus including at least one processing device 140 and at least one storage device 150; the at least one storage device 150 is used to store computer instructions, and the at least one processing device 140 is for executing at least part of the computer instructions to implement the dynamic perspective method 200 as described above.
  • a computer-readable storage medium stores computer instructions, and when the computer instructions are read by a computer (such as the processing device 140 ), the processing device 140 executes the above The dynamic perspective method 200 is described.
  • the above description of the dynamic fluoroscopy system and its devices/modules is only for the convenience of description, and cannot limit the present application to the scope of the illustrated embodiments. It can be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various devices/modules without departing from this principle, or form a subsystem to connect with other devices/modules .
  • the photographing module 1410 and the display module 1420 disclosed in FIG. 14 may be different modules in one apparatus (eg, the processing device 140 ), or may be one module implementing the functions of two or more modules described above.
  • the shooting module 1410 and the display module 1420 may be two modules, or one module may have the functions of capturing and displaying dynamic images at the same time.
  • each module may have its own storage module.
  • each module may share one storage module.
  • The photographing module 1410 may include a first photographing sub-module and a second photographing sub-module, wherein the first photographing sub-module may be used to acquire first fluoroscopy data of the radiation source irradiating the subject at the first energy during the photographing process, and the second photographing sub-module may be used to acquire second fluoroscopy data of the radiation source irradiating the subject at the second energy.
  • The possible beneficial effects of the embodiments of the present application include, but are not limited to: (1) helping medical staff quickly understand the focus of the subject and/or the dynamic changes of various tissues and organs over multiple shooting cycles; (2) being able to use dual-energy subtraction technology to obtain a desired type of image. It should be noted that different embodiments may have different beneficial effects, and in different embodiments the possible beneficial effects may be any one or a combination of the above, or any other possible beneficial effect.
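Dual-energy subtraction is commonly realized as a weighted logarithmic subtraction of the low- and high-energy images; the sketch below assumes that common form, and the weight `w` is a hypothetical tuning parameter (chosen to suppress a given tissue type), not a value given in this application.

```python
import numpy as np

def dual_energy_subtract(low_kv, high_kv, w=0.5, eps=1e-6):
    """Weighted log subtraction of two fluoroscopy exposures.

    low_kv / high_kv: intensity images acquired at the first (lower) and
    second (higher) energy. The weight w trades off which tissue is
    suppressed; eps guards the logarithm against zero-valued pixels.
    """
    low = np.log(np.asarray(low_kv, dtype=float) + eps)
    high = np.log(np.asarray(high_kv, dtype=float) + eps)
    return high - w * low
```

For pixel intensities e and e² with w = 0.5, the subtracted value is 2 − 0.5·1 = 1.5.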
  • FIG. 15 is a schematic diagram of an imaging positioning apparatus of a medical imaging device according to Embodiment 4 of the present application. This embodiment can be applied to the situation of locating a target part; the apparatus can be implemented in software and/or hardware, and the imaging positioning apparatus includes: a virtual human body model acquisition module 1510, a human body internal image display module 1520, a first target imaging position determination module 1530, and a second target imaging position determination module 1540.
  • the virtual human body model obtaining module 1510 is used to obtain the virtual human body model corresponding to the imaging object, and obtain the first position information corresponding to the user operation instruction;
  • the internal human body image display module 1520 is configured to determine the internal human body image corresponding to the first position information in the virtual human body model based on the first position information, and display the internal human body image;
  • a first target imaging position determination module 1530 configured to determine a target imaging position corresponding to the medical imaging device according to the first position information if the internal image of the human body corresponding to the first position information corresponds to a target shooting position ;
  • the second target imaging position determination module 1540 is configured to continue to acquire second position information different from the first position information if the internal image of the human body corresponding to the first position information does not correspond to the target shooting position, and The target imaging position corresponding to the medical imaging device is determined according to the internal image of the human body corresponding to the second position information.
  • In some embodiments, the first position information includes first model information corresponding to the virtual human body model or first device information corresponding to the medical imaging device, wherein there is an association relationship between the first model information and the first device information.
  • the association relationship includes a position association relationship
  • the device further includes:
  • a position correlation determination module configured to convert the relative position relationship between the medical imaging device and the imaging object into a position correlation relationship between the medical imaging device and the virtual human body model; wherein the position correlation relationship is used to represent the first model The relationship of the location parameter between the information and the first device information.
  • association relationship further includes a visual field association relationship
  • device further includes:
  • the visual field correlation relationship acquisition module is used to acquire the visual field correlation relationship between the first model information and the first device information; wherein, the visual field correlation relationship is used to represent the relationship between the visual field parameters between the first model information and the first device information.
  • the virtual human body model obtaining module 1510 includes:
  • the virtual human body model is displayed on the interactive interface, and the first model information corresponding to the user operation instruction is displayed on the virtual human body model.
  • the first model information includes a graphical mark.
  • the graphical mark performs at least one operation of selection, movement, reduction and enlargement based on user operation instructions.
  • the device further includes:
  • The first imaging position corresponding to the medical imaging device is determined according to the first model information and the association relationship, and the medical imaging device is controlled to move to the first imaging position.
  • the first target imaging position determination module 1530 includes:
  • a first target device information determination unit configured to determine a target imaging position corresponding to the medical imaging device according to the first model information and the association relationship when the first position information is the first model information, and The medical imaging device can be controlled to move to a target imaging position.
  • the device further includes:
  • the imaging operation execution module is configured to control the imaging component in the medical imaging device to perform the imaging operation based on the field of view parameter; wherein the field of view parameter includes at least one of a source-image distance, a source-object distance, and a magnification.
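The relationship among the field-of-view parameters named above can be illustrated by the standard geometric relation M = SID / SOD; treating magnification as this ratio of source-image distance to source-object distance is a common convention assumed here, not a formula stated in this application.

```python
def magnification(sid_mm, sod_mm):
    """Geometric magnification from source-image distance (SID) and
    source-object distance (SOD): M = SID / SOD.

    The object always lies between the source and the detector, so a
    valid geometry requires 0 < SOD <= SID (hence M >= 1).
    """
    if sod_mm <= 0 or sid_mm < sod_mm:
        raise ValueError("require 0 < SOD <= SID")
    return sid_mm / sod_mm
```

For example, SID = 1000 mm and SOD = 800 mm give a magnification of 1.25.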
  • the internal image display module 1520 of the human body is specifically used for:
  • first model information corresponding to the virtual human body model is determined according to the first device information and the association relationship, and an internal image of the human body is determined based on the first model information .
  • the first target imaging position determination module 1530 includes:
  • the second target device information determining unit is configured to use the imaging position in the first device information as the target imaging position corresponding to the medical imaging device when the first position information is the first device information.
  • The virtual human body model obtaining module 1510 is specifically used for: acquiring height data of the imaging object, and selecting a virtual human body model corresponding to the height data; wherein the virtual human body model includes a human body shape model and an internal human body model.
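Selecting a virtual human body model that corresponds to the height data might be sketched as a nearest-height lookup; the model table, its field names, and the nearest-match rule are hypothetical illustration choices, not the selection logic prescribed by this application.

```python
def select_model(height_cm, models):
    """Pick the virtual human body model whose nominal height is closest
    to the imaging object's height.

    models: list of dicts with (hypothetical) keys "name" and "height_cm".
    """
    return min(models, key=lambda m: abs(m["height_cm"] - height_cm))
```

For a 172 cm subject and models at 160/170/180 cm, the 170 cm model is selected.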
  • the internal model of the human body includes at least one of a blood vessel model, an organ model, a bone model and a muscle model.
  • In some embodiments, the medical imaging equipment includes digital X-ray imaging equipment, C-arm X-ray equipment, a mammography machine, computed tomography equipment, magnetic resonance equipment, positron emission tomography (PET) equipment, positron emission tomography and computed tomography (PET-CT) equipment, positron emission tomography and magnetic resonance imaging (PET-MR) equipment, or radiotherapy imaging (RT) equipment.
  • the imaging positioning device provided by the embodiment of the present application can be used to execute the imaging positioning method provided by the embodiment of the present application, and has corresponding functions and beneficial effects of the execution method.
  • the units and modules included are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized;
  • the specific names of the functional units are only for the convenience of distinguishing from each other, and are not used to limit the protection scope of the present application.
  • FIG. 16 is a schematic structural diagram of a medical imaging device according to Embodiment 5 of the present application.
  • This embodiment of the present application provides services for implementing the imaging positioning method of the above-mentioned embodiments, and the imaging positioning apparatus of the above-mentioned embodiments can be configured in the device.
  • FIG. 16 shows a block diagram of an exemplary electronic device 16 suitable for use in implementing embodiments of the present application.
  • the electronic device 16 shown in FIG. 16 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • the electronic device 16 takes the form of a general purpose computing device.
  • Components of electronic device 16 may include, but are not limited to, one or more processors or processing units 161 , system memory 162 , and a bus 163 connecting various system components including system memory 162 and processing unit 161 .
  • Bus 163 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MAC) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect ( PCI) bus.
  • Electronic device 16 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 16, including both volatile and non-volatile media, removable and non-removable media.
  • System memory 162 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 1621 and/or cache memory 1622 .
  • Electronic device 16 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 1623 may be used to read and write to non-removable, non-volatile magnetic media (not shown in Figure 16, commonly referred to as a "hard drive”).
  • Although not shown in FIG. 16, a disk drive for reading from and writing to removable non-volatile magnetic disks (e.g., "floppy disks") and an optical disk drive for reading from and writing to removable non-volatile optical disks (e.g., CD-ROM, DVD-ROM) may also be provided; in these cases, each drive may be connected to bus 163 through one or more data media interfaces.
  • System memory 162 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present application.
  • Program modules 1624 generally perform the functions and/or methods of the embodiments described herein.
  • the electronic device 16 may also communicate with one or more external devices 164 (eg, keyboards, pointing devices, display 1641, etc.), with one or more devices that enable a user to interact with the electronic device 16, and/or with Any device (eg, network card, modem, etc.) that enables the electronic device 16 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interface 165 . Also, the electronic device 16 may communicate with one or more networks (eg, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 166 . As shown in FIG. 16 , the network adapter 166 communicates with other modules of the electronic device 16 through the bus 163 . It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with electronic device 16, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
  • the processing unit 161 executes various functional applications and data processing by running the programs stored in the system memory 162 , for example, to implement the imaging positioning method provided by the embodiments of the present application.
  • the electronic device 16 may be a terminal device configured with an imaging positioning device, the terminal device being communicatively connected to the illumination system. In another embodiment, the electronic device 16 may also be an illumination system equipped with an imaging positioning device.
  • the problem of radiation damage to the human body is solved. While ensuring that imaging positioning is achieved, the user can arbitrarily set parameters during the imaging positioning process without causing any damage to the human body, thereby improving the accuracy of positioning.
  • Embodiment 5 of the present application further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are used to execute an imaging positioning method when executed by a computer processor, and the method includes:
  • if the internal image of the human body corresponding to the first position information corresponds to the target shooting position, determining a target imaging position corresponding to the medical imaging device according to the first position information;
  • if the image inside the human body corresponding to the first position information does not correspond to the target shooting position, continuing to acquire second position information that is different from the first position information, and determining the target imaging position corresponding to the medical imaging device according to the image corresponding to the second position information.
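The positioning flow above (acquire position information, check whether the corresponding internal image of the human body matches the target shooting position, otherwise acquire new position information) can be sketched as a simple loop. The helper names and the position representation below are hypothetical assumptions, not the patent's actual interfaces.

```python
# Hypothetical sketch of the imaging-positioning loop; `acquire_position` and
# `matches_target_view` are assumed callables, not the patent's actual API.

def locate_target_imaging_position(acquire_position, matches_target_view):
    """Acquire position information until the corresponding internal image of
    the human body matches the target shooting position, then return it."""
    position = acquire_position()
    while not matches_target_view(position):
        # Continue acquiring position information different from the first.
        position = acquire_position()
    return position

# Toy usage: positions arrive one by one until the target view is matched.
positions = iter([(0, 0), (5, 2), (5, 5)])
found = locate_target_imaging_position(lambda: next(positions),
                                       lambda p: p == (5, 5))
```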
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • a storage medium containing computer-executable instructions provided by the embodiments of the present application, whose computer-executable instructions are not limited to the above method operations and can also perform related operations in the imaging positioning method provided by any embodiment of the present application.
  • FIG. 17 is a schematic diagram of an image display device according to Embodiment 3 of the present application. This embodiment can be applied to the case where an angiography device is used for imaging and the imaging result is displayed, the apparatus can be implemented in software and/or hardware, and the device can be configured in an angiography device.
  • the image display device includes: a first blood vessel image display module 1710 , a second visual field parameter determination module 1720 and a second blood vessel image display module 1730 .
  • the first blood vessel image display module 1710 is configured to acquire the first blood vessel image of the imaging object collected by the angiography device based on the first visual field parameter, and display the first blood vessel image;
  • the second field of view parameter determination module 1720 is configured to determine the second field of view parameter based on the received parameter adjustment instruction and the first field of view parameter; wherein, the parameter adjustment instruction includes a scaling adjustment ratio;
  • the second blood vessel image display module 1730 is configured to determine the second blood vessel image based on the second visual field parameter, and display the second blood vessel image and the first blood vessel image synchronously.
  • the second blood vessel image is determined based on the received parameter adjustment instruction while the first blood vessel image is displayed, and the second blood vessel image is displayed synchronously with the first blood vessel image, thereby solving the problem of repeatedly switching and displaying scanned images of different visual fields; this reduces the number of repeated imaging operations and the radiation dose, extends the service life of the angiography equipment, improves the diagnostic efficiency of angiography, and reduces the error in angiography diagnostic results caused by repeated switching.
  • the second visual field parameter determination module 1720 includes:
  • a second magnification ratio determining unit configured to determine a second magnification ratio in the second visual field parameter based on the zoom adjustment ratio and the first magnification ratio in the first visual field parameter.
  • the second visual field parameter determination module 1720 includes:
  • a second visual field range determining unit configured to determine a second visual field range in the second visual field parameter based on the zoom adjustment ratio and the first visual field range in the first visual field parameter.
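The two determination units above can be illustrated with a minimal sketch, assuming a simple multiplicative relation between the zoom adjustment ratio, the magnification, and the visual field range. All names and the exact scaling rules are assumptions, not the patent's actual formulas.

```python
# Minimal sketch of deriving the second visual field parameter from the first
# one and the zoom adjustment ratio; the rules below are assumed for illustration.

def derive_second_field(first_magnification, first_range, zoom_ratio):
    """Derive the second magnification and second visual field range."""
    # A zoom adjustment ratio > 1 enlarges the image ...
    second_magnification = first_magnification * zoom_ratio
    # ... and correspondingly shrinks the visible field-of-view extents.
    second_range = tuple(extent / zoom_ratio for extent in first_range)
    return second_magnification, second_range

mag, rng = derive_second_field(1.0, (200.0, 200.0), 2.0)
```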
  • the imaging object includes a blood vessel and a catheter inserted into the blood vessel
  • the second visual field parameter determination module 1720 includes:
  • the second visual field center position determination unit is configured to acquire the catheter tip image corresponding to the catheter in the first blood vessel image when the zoom adjustment ratio is greater than one, and to use the visual field center position and visual field range corresponding to the catheter tip image as the second visual field center position and the second visual field range in the second visual field parameter.
  • the second blood vessel image display module 1730 includes:
  • a second blood vessel image determining unit configured to acquire a second blood vessel image acquired by the angiography device based on the second visual field parameter; or, when the scaling adjustment ratio is greater than one, to determine a reference blood vessel image in the first blood vessel image based on the second visual field center position and the second visual field range in the second visual field parameter, and to determine the second blood vessel image based on the second magnification in the second visual field parameter and the reference blood vessel image.
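The digital-zoom branch above (extract a reference region around the second visual field center from the first blood vessel image, then magnify it) can be sketched as follows. The row/column conventions and the clamping to image bounds are assumptions for illustration.

```python
import numpy as np

# Illustrative crop of the reference blood-vessel region around the second
# visual field centre; coordinates are (row, col) and extents are half-widths.

def crop_reference_region(first_image, center_rc, half_extent_rc):
    """Return the reference region around the field-of-view centre,
    clamped to the image bounds."""
    rows, cols = first_image.shape[:2]
    r0 = max(0, center_rc[0] - half_extent_rc[0])
    r1 = min(rows, center_rc[0] + half_extent_rc[0])
    c0 = max(0, center_rc[1] - half_extent_rc[1])
    c1 = min(cols, center_rc[1] + half_extent_rc[1])
    return first_image[r0:r1, c0:c1]

patch = crop_reference_region(np.zeros((512, 512)), (256, 256), (64, 64))
```

The cropped patch would then be scaled up by the second magnification for display.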
  • the second blood vessel image display module 1730 includes:
  • the first display unit of the second blood vessel image is configured to display the second blood vessel image in a display area different from that of the first blood vessel image on the display device corresponding to the first blood vessel image; or, to display the second blood vessel image on a display device different from the one on which the first blood vessel image is displayed.
  • the second blood vessel image display module 1730 includes:
  • the updated second blood vessel image display unit is configured to, when the first visual field parameter corresponding to the first blood vessel image changes, acquire the updated first blood vessel image collected by the angiography device based on the changed first visual field parameter; determine the updated second visual field parameter based on the changed first visual field parameter and the parameter adjustment instruction; and synchronously display the updated second blood vessel image, determined based on the updated second visual field parameter, together with the updated first blood vessel image.
  • the image display device provided by the embodiment of the present application can be used to execute the image display method provided by the embodiment of the present application, and has functions and beneficial effects corresponding to the execution method.
  • the units and modules included are only divided according to functional logic, but are not limited to the above-mentioned division, as long as the corresponding functions can be realized;
  • the specific names of the functional units are only for the convenience of distinguishing from each other, and are not used to limit the protection scope of the present application.
  • FIG. 18 is a schematic structural diagram of an angiography apparatus provided by an embodiment of the present application.
  • the embodiment of the present application provides services for the implementation of the image display method described in any of the above-mentioned embodiments of the present application, and the image in the embodiment of the present application can be configured display device.
  • the angiography device includes an imaging component 180, at least one display device 181, and a controller 182, wherein the imaging component 180 is used to acquire a first blood vessel image based on a first visual field parameter; the display device 181 is used to display the first blood vessel image and second blood vessel image; controller 182 includes one or more processors and a memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement The image display method according to any one of the above embodiments.
  • the imaging assembly includes an X-ray source and a detector.
  • the X-ray source is used to emit X-rays; the detector is used to generate a first blood vessel image according to the received X-rays.
  • the X-ray source includes an X-ray high-voltage generator and an X-ray tube.
  • the types of X-ray high-voltage generators include power-frequency high-voltage generators and high-frequency inverter high-voltage generators.
  • the most common type in application is the high-frequency inverter high-voltage generator, which is divided into the continuous high-frequency inverter high-voltage generator and the computer-controlled pulsed high-frequency inverter high-voltage generator; for the continuous high-frequency inverter high-voltage generator, the pulse wave during pulse collection needs to rely on the grid switch in the X-ray tube to be generated.
  • the X-ray tube can receive high-frequency high-voltage output to stimulate and generate X-rays.
  • the specification parameters of the X-ray tube include structural parameters and electrical parameters.
  • the former refers to various parameters determined by the structure of the X-ray tube, such as the inclination angle of the target surface, effective focus, external dimensions, weight, filtration equivalent of the tube wall, anode speed, working temperature and cooling form.
  • Electrical parameters refer to the specification data of the electrical performance of the X-ray tube, such as filament heating voltage and current, maximum tube voltage, tube current, longest exposure time, maximum allowable power and anode heat capacity, etc.
  • the X-ray tube is characterized by a small focus, high heat capacity, high load capacity, high rotation speed, and a high heat dissipation rate.
  • liquid metal bearing technology can be used in making the X-ray tube, which avoids the bearing wear that an ordinary tube suffers at high rotation speeds; this not only increases the heat dissipation efficiency but also improves the X-ray tube's ability to withstand continuous loads, extends the tube's service life, reduces the intrinsic noise of the device, and improves the signal-to-noise ratio of the image.
  • the detector includes a flat panel detector.
  • the placement relationship between the display devices and the placement position of the display devices can be selected according to the user's usage habits.
  • the magnification levels of the blood vessel images corresponding to different display devices 181 may be preset.
  • for example, the magnification levels corresponding to display device A, display device B, and display device C decrease successively; that is, the blood vessel image with the largest magnification is displayed on display device A, and the blood vessel image with the smallest magnification is displayed on display device C.
  • the advantage of this setting is that, if no correspondence between blood vessel images and display devices is set, then when there are many display devices a user who wants to view a target blood vessel image must check the blood vessel images on each display device one by one to find the required image, which increases the burden of locating the target blood vessel image.
  • the corresponding relationship between the blood vessel image and the display device is set, and the user can follow the corresponding relationship to search for the target blood vessel image, thereby greatly reducing the user's burden of finding the target blood vessel image, thereby improving the diagnostic efficiency of angiography.
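The preset correspondence between display devices and magnification levels could be implemented as a simple lookup, as in the sketch below; the device names and magnification levels are invented for illustration.

```python
# Hypothetical preset mapping from display devices to magnification levels;
# both the names and the levels are assumptions, not values from the patent.
DISPLAY_MAGNIFICATION = {"display_A": 4.0, "display_B": 2.0, "display_C": 1.0}

def device_for_magnification(magnification):
    """Route a blood-vessel image to the display whose preset level is closest."""
    return min(DISPLAY_MAGNIFICATION,
               key=lambda device: abs(DISPLAY_MAGNIFICATION[device] - magnification))

best = device_for_magnification(3.5)
```

With such a mapping, the user always knows which display to look at for a given magnification, which is the burden reduction the passage above describes.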
  • the angiography apparatus further includes a catheter for injecting the contrast agent into the target blood vessel.
  • the memory in the controller can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image display method in the embodiments of the present application (for example, the first a blood vessel image display module 1710, a second visual field parameter determination module 1720, and a second blood vessel image display module 1730).
  • the processor executes various functional applications and data processing of the angiography apparatus by running the software programs, instructions and modules stored in the memory, that is, the above-mentioned image display method is implemented.
  • the memory may mainly include a stored program area and a stored data area, wherein the stored program area may store an operating system and an application program required for at least one function; the stored data area may store data created according to the use of the terminal, and the like. Additionally, the memory may include high speed random access memory, and may also include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid state storage device. In some instances, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the angiography device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the angiography apparatus further includes an input device for receiving input numerical or character information, and generating key signal input related to user settings and function control of the angiography apparatus.
  • the above-mentioned angiography equipment solves the problem of repeatedly switching and displaying scanned images of different fields of view, reduces the number of repeated imaging and the amount of radiation, improves the service life of the angiography equipment, and further improves the diagnostic efficiency of angiography.
  • the error of the diagnostic result of angiography caused by repeated switching is reduced.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions, where the computer-executable instructions are used to execute an image display method when executed by a computer processor, and the method includes:
  • the second field of view parameter is determined based on the received parameter adjustment instruction and the first field of view parameter; wherein the parameter adjustment instruction includes a scaling adjustment ratio;
  • the second blood vessel image is determined based on the second visual field parameter, and the second blood vessel image and the first blood vessel image are displayed synchronously.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • a storage medium containing computer-executable instructions provided by the embodiments of the present application, whose computer-executable instructions are not limited to the above method operations and can also perform related operations in the image display method provided by any embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of an apparatus for generating a volume reconstructed image provided by an embodiment of the present application.
  • the volume reconstruction image generation device includes: a projection data acquisition module 1910 , a target volume coordinate system generation module 1920 , and a target volume reconstruction image generation module 1930 .
  • the projection data acquisition module 1910 is used to acquire the projection data under each scanning angle;
  • the target volume coordinate system generation module 1920 is used to construct the target volume coordinate system based on the initial volume coordinate system and the desired reconstruction direction;
  • the target volume reconstructed image generation module 1930 is configured to reconstruct the projection data according to the desired reconstruction direction in the target volume coordinate system, so as to generate a target volume reconstructed image.
  • the device further includes: an initial volume coordinate system determination module; wherein, the initial volume coordinate system determination module is used to perform volume reconstruction on the projection data to obtain an initial volume reconstructed image; based on the The initial volumetric reconstructed image determines the initial volumetric coordinate system.
  • the first coordinate origin of the initial volume coordinate system is any position on the edge, along the first vertical axis, of the reconstructed volume corresponding to the initial volume reconstructed image;
  • the first horizontal axis and the first longitudinal axis, in the plane to which the first vertical axis belongs, each take the first coordinate origin as the starting point and are perpendicular to the first vertical axis;
  • the first vertical axis takes the first coordinate origin as the starting point and is perpendicular to the plane where the first horizontal axis and the first longitudinal axis are located.
  • the target volume coordinate system generation module 1920 is further configured to determine the reconstructed volume to which the initial volume coordinate system belongs;
  • a first current plane is determined to be perpendicular to the desired reconstruction direction based on the desired reconstruction direction;
  • a second current plane is constructed based on the first longitudinal axis and the first vertical axis, and the midpoint of the intersection of the second current plane and the first current plane is used as the second coordinate origin of the target volume coordinates, and taking the desired reconstruction direction as the second vertical axis of the target volume coordinate system;
  • the side to which the second coordinate origin belongs is taken as the second longitudinal axis of the target volume coordinate system, and the vector that takes the second coordinate origin as the starting point and is perpendicular to the plane where the second longitudinal axis and the second vertical axis are located is taken as the second horizontal axis.
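Constructing a volume coordinate system whose vertical axis is the desired reconstruction direction amounts to building an orthonormal basis around that direction. The sketch below does this with cross products; it is an illustrative assumption, not necessarily the patent's exact construction rule.

```python
import numpy as np

# Assumed construction: normalise the desired direction, then obtain the two
# remaining axes via cross products so the basis is orthonormal.

def build_target_basis(desired_direction):
    """Return (horizontal, longitudinal, vertical) unit axes with the vertical
    axis aligned to the desired reconstruction direction."""
    z = np.asarray(desired_direction, dtype=float)
    z = z / np.linalg.norm(z)
    # Any vector not parallel to z works as a helper for the first cross product.
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return x, y, z

x_axis, y_axis, z_axis = build_target_basis([0.0, 0.0, 2.0])
```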
  • the target volume reconstructed image generation module 1930 is further configured to: determine the pixel range under the target volume coordinate system based on the pixel range under the initial volume coordinate system;
  • the projection data is reconstructed according to the desired reconstruction direction to generate the target volume reconstruction image.
  • the device further includes: a preprocessing module; wherein, the preprocessing module is used for preprocessing the projection data.
  • the device further includes: a rendering module; wherein, the rendering module is used to obtain an image rendering instruction of the reconstructed image of the target volume;
  • the pixel points of the target volume reconstructed image are rendered, and the rendered target volume reconstructed image is displayed.
  • a target volume coordinate system is constructed based on the initial volume coordinate system and the desired reconstruction direction after acquiring the projection data at each scanning angle, and in the target volume coordinate system the projection data is reconstructed according to the desired reconstruction direction to generate a target volume reconstructed image.
  • FIG. 20 is a schematic structural diagram of a volume reconstruction image generation system provided by an embodiment of the present application.
  • the volume reconstruction image generation system includes: a control device 1 and an image acquisition device 2 .
  • the image acquisition device 2 is used for scanning the scanning object under each scanning angle, so as to obtain projection data under each scanning angle.
  • Figure 21 shows a block diagram of an exemplary image capture device 2 suitable for implementing embodiments of the present application.
  • the image capturing device 2 shown in FIG. 21 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • the image capture device 2 is represented in the form of a general-purpose computing device.
  • Components of the image capture device 2 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting various system components including the system memory 28 and the processing unit 16.
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Image acquisition device 2 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the image capture device 2, including volatile and non-volatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory.
  • Image acquisition device 2 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 21, commonly referred to as a "hard drive").
  • a disk drive for reading and writing removable non-volatile magnetic disks (e.g., "floppy disks")
  • removable non-volatile optical disks (e.g., CD-ROM, DVD-ROM)
  • each drive may be connected to bus 18 through one or more data media interfaces.
  • the memory 28 may include at least one program product having a set of program modules (e.g., the projection data acquisition module 1910, the target volume coordinate system generation module 1920, and the target volume reconstructed image generation module 1930 of the volume reconstructed image generation device) configured to perform the functions of the various embodiments of the present application.
  • a program/utility 44 having a set of program modules 46 (e.g., the projection data acquisition module 1910, the target volume coordinate system generation module 1920, and the target volume reconstructed image generation module 1930) may be stored, for example, in the memory 28; such program modules 46 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
  • Program modules 46 generally perform the functions and/or methods of the embodiments described herein.
  • the image capture device 2 may also communicate with one or more external devices 14 (e.g., keyboards, pointing devices, displays 24, etc.), with one or more devices that enable a user to interact with the image capture device 2, and/or with any device (e.g., a network card, modem, etc.) that enables the image capture device 2 to communicate with one or more other computing devices. Such communication may take place through the input/output (I/O) interface 22. Also, the image capture device 2 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown in FIG. 21, the network adapter 20 communicates with other modules of the image capture device 2 via the bus 18.
  • it should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the image capture device 2, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example, implementing a volume reconstruction image generation method provided by the embodiments of the present application, including:
  • the target volume coordinate system is constructed based on the initial volume coordinate system and the desired reconstruction direction; and
  • the projection data is reconstructed according to the desired reconstruction direction to generate a target volume reconstruction image.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28, for example, implementing a volume reconstruction image generation method provided by the embodiments of the present application.
  • processor can also implement the technical solution of the volume reconstruction image generation method provided by any embodiment of the present application.
• Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements a volume reconstruction image generation method as provided by the embodiments of the present application, including: acquiring projection data at each scan angle; constructing a target volume coordinate system based on an initial volume coordinate system and a desired reconstruction direction; and reconstructing, in the target volume coordinate system, the projection data according to the desired reconstruction direction to generate a target volume reconstruction image.
• the computer program stored thereon is not limited to the above method operations, and can also perform related operations in a volume reconstruction image generation method provided by any embodiment of the present application.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
• the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
• a computer-readable storage medium can be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
• the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code (for example, code implementing the projection data acquisition, the initial volume coordinate system, the desired reconstruction direction, and the target volume coordinate system). Such a propagated data signal may take a variety of forms.
• a computer-readable signal medium can also be any computer-readable medium, other than a computer-readable storage medium, that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
• program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
• computer program code for carrying out the operations of the present application may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
• the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
• the modules included above are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be realized;
• the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application.


Abstract

A dynamic fluoroscopy method and system for a C-arm device. The method includes: imaging a subject within one imaging cycle, during which first fluoroscopy data of the subject irradiated by a radiation source at a first energy and second fluoroscopy data of the subject irradiated by the radiation source at a second energy different from the first energy are acquired (210); imaging the subject over a plurality of consecutive imaging cycles (220); and displaying a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data acquired in each of the plurality of consecutive imaging cycles (230).

Description

Dynamic fluoroscopy method, apparatus and system for a C-arm device
Related Applications
This application claims priority to the following Chinese patent applications: Application No. 202010957835.1, filed on September 11, 2020, entitled "Dynamic fluoroscopy method and system for a C-arm device"; Application No. 202011017481.9, filed on September 24, 2020, entitled "Image display method, apparatus, angiography device and storage medium"; Application No. 202011019279.X, filed on September 24, 2020, entitled "Volume reconstruction image generation method, apparatus, system and storage medium"; and Application No. 202011019324.1, filed on September 24, 2020, entitled "Imaging positioning method, apparatus, device and storage medium for a medical imaging device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image processing, and more specifically to a dynamic fluoroscopy method, apparatus and system for a C-arm device.
Background
Radiological devices (e.g., DSA devices, DR devices, X-ray machines, mammography machines, etc.) emit radiation (e.g., X-rays) to image and/or treat a subject. During surgery or diagnosis, if medical staff can dynamically observe the changes and movements of the subject's lesions and/or tissues and organs, diagnostic results can be obtained, or clinical operations performed, more quickly and accurately.
It is therefore necessary to provide a dynamic fluoroscopy method and system to better assist diagnosis/treatment.
Summary
In view of this, the present application discloses a dynamic fluoroscopy method, apparatus and system for a C-arm device, so that medical staff can dynamically observe the changes and movements of a subject's lesions and/or tissues and organs, and thereby obtain diagnostic results or perform clinical operations more quickly and accurately.
In a first aspect, an embodiment of the present application provides a dynamic fluoroscopy method for a C-arm device, including: imaging a subject within one imaging cycle, and during the imaging, acquiring first fluoroscopy data of the subject irradiated by a radiation source at a first energy and second fluoroscopy data of the subject irradiated by the radiation source at a second energy different from the first energy; performing the imaging on the subject over a plurality of consecutive imaging cycles; and displaying a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data acquired in each of the plurality of consecutive imaging cycles.
In a second aspect, an embodiment of the present application provides an imaging positioning method for a medical imaging device, including:
acquiring a virtual human body model corresponding to an imaging object, and acquiring first position information corresponding to a user operation instruction;
determining, based on the first position information, an internal human body image in the virtual human body model corresponding to the first position information, and displaying the internal human body image;
if the internal human body image corresponding to the first position information corresponds to a target scan position, determining a target imaging position of the medical imaging device according to the first position information;
if the internal human body image corresponding to the first position information does not correspond to the target scan position, continuing to acquire second position information different from the first position information, and determining the target imaging position of the medical imaging device according to the internal human body image corresponding to the second position information.
In the embodiments of the present invention, the internal human body image corresponding to the first position information input by the user is determined based on a virtual human body model, and the target imaging position of the medical imaging device is obtained by judging whether the internal human body image corresponds to the target scan position. This solves the problem of radiation damage to the human body during the imaging operation: the user can set parameters arbitrarily during positioning without causing any harm to the human body, which also ensures positioning accuracy.
In a third aspect, an embodiment of the present application provides an image display method, including:
acquiring a first blood vessel image of an imaging object collected by an angiography device based on a first field-of-view parameter, and displaying the first blood vessel image;
determining a second field-of-view parameter based on a received parameter adjustment instruction and the first field-of-view parameter, where the parameter adjustment instruction includes a zoom adjustment ratio;
determining a second blood vessel image based on the second field-of-view parameter, and displaying the second blood vessel image synchronously with the first blood vessel image.
In the embodiments of the present application, while the first blood vessel image is displayed, the second blood vessel image is determined based on the received parameter adjustment instruction and displayed synchronously with the first blood vessel image. This solves the problem of repeatedly switching the display between scan images of different fields of view, reduces the number of repeated acquisitions and the radiation dose, extends the service life of the angiography device, improves the diagnostic efficiency of angiography, and reduces the errors in angiographic diagnosis caused by repeated switching.
In a fourth aspect, an embodiment of the present application provides a volume reconstruction image generation method, including:
acquiring projection data at each scan angle;
constructing a target volume coordinate system based on an initial volume coordinate system and a desired reconstruction direction;
reconstructing, in the target volume coordinate system, the projection data according to the desired reconstruction direction to generate a target volume reconstruction image.
The technical solution of this embodiment acquires projection data at each scan angle, constructs a target volume coordinate system based on an initial volume coordinate system and a desired reconstruction direction, and reconstructs the projection data in the target volume coordinate system according to the desired reconstruction direction to generate a target volume reconstruction image. This solves the problem in the prior art that only one good image can be obtained in the direction parallel to the detector and no information can be obtained in other directions. By setting different desired reconstruction directions, reconstruction can be performed in each of them, yielding volume reconstruction images with good resolution in all directions, which helps the user analyze volume reconstruction images in multiple reconstruction directions effectively and perform target positioning.
In a fifth aspect, an embodiment of the present application provides a dynamic fluoroscopy system for a C-arm device, including an imaging module and a display module. The imaging module is configured to image a subject within one imaging cycle and, during the imaging, acquire first fluoroscopy data of the subject irradiated by a radiation source at a first energy and second fluoroscopy data of the subject irradiated by the radiation source at a second energy different from the first energy; the imaging module is further configured to perform the imaging on the subject over a plurality of consecutive imaging cycles; and the display module is configured to display a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data acquired in each of the plurality of consecutive imaging cycles.
In a sixth aspect, an embodiment of the present application provides an imaging positioning apparatus for a medical imaging device, including:
a virtual human body model acquisition module, configured to acquire a virtual human body model corresponding to an imaging object and acquire first position information corresponding to a user operation instruction;
an internal human body image display module, configured to determine, based on the first position information, an internal human body image in the virtual human body model corresponding to the first position information, and display the internal human body image;
a first target imaging position determination module, configured to determine, if the internal human body image corresponding to the first position information corresponds to a target scan position, a target imaging position of the medical imaging device according to the first position information; and
a second target imaging position determination module, configured to, if the internal human body image corresponding to the first position information does not correspond to the target scan position, continue to acquire second position information different from the first position information, and determine the target imaging position of the medical imaging device according to the internal human body image corresponding to the second position information.
In a seventh aspect, an embodiment of the present application provides an image display apparatus, including:
a first blood vessel image display module, configured to acquire a first blood vessel image of an imaging object collected by an angiography device based on a first field-of-view parameter, and display the first blood vessel image;
a second field-of-view parameter determination module, configured to determine a second field-of-view parameter based on a received parameter adjustment instruction and the first field-of-view parameter, where the parameter adjustment instruction includes a zoom adjustment ratio; and
a second blood vessel image display module, configured to determine a second blood vessel image based on the second field-of-view parameter, and display the second blood vessel image synchronously with the first blood vessel image.
In an eighth aspect, an embodiment of the present application provides an angiography device, including an imaging assembly, at least one display device and a controller;
the imaging assembly is configured to collect a first blood vessel image based on a first field-of-view parameter;
the display device is configured to display the first blood vessel image and a second blood vessel image; and
the controller includes one or more processors and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the image display methods described above.
In a ninth aspect, an embodiment of the present application further provides a volume reconstruction image generation apparatus, including:
a projection data acquisition module, configured to acquire projection data at each scan angle;
a target volume coordinate system generation module, configured to construct a target volume coordinate system based on an initial volume coordinate system and a desired reconstruction direction; and
a target volume reconstruction image generation module, configured to reconstruct, in the target volume coordinate system, the projection data according to the desired reconstruction direction to generate a target volume reconstruction image.
In a tenth aspect, an embodiment of the present application further provides a volume reconstruction image generation system, including a control device and an image acquisition device;
the control device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the volume reconstruction image generation method described above when executing the computer program; and
the image acquisition device is configured to scan a scan object at each of the scan angles to obtain the projection data at each of the scan angles.
In an eleventh aspect, an embodiment of the present application provides a dynamic fluoroscopy apparatus for a C-arm device, including at least one processor and at least one storage device storing instructions which, when executed by the at least one processor, implement the dynamic fluoroscopy method of any embodiment of the present application.
In a twelfth aspect, an embodiment of the present application provides a C-arm imaging system, including a radiation source, a detector, a memory and a display, where the radiation source includes an X-ray tube, a high-voltage generator and a high-voltage control module. The high-voltage control module controls the high-voltage generator to switch back and forth between a first energy and a second energy. Driven by the high-voltage generator, the radiation source emits radiation toward a subject within one imaging cycle, and the detector acquires first fluoroscopy data of the subject irradiated at the first energy and second fluoroscopy data of the subject irradiated at the second energy. The memory stores the first fluoroscopy data and the second fluoroscopy data acquired in each of a plurality of consecutive imaging cycles. The memory also stores an image processing unit that performs subtraction processing on the first fluoroscopy data and the second fluoroscopy data of each imaging cycle to obtain an image of the subject for each imaging cycle and thereby a dynamic image over the plurality of imaging cycles. The display is configured to display the dynamic image of the subject over the plurality of imaging cycles.
In a thirteenth aspect, an embodiment of the present application provides a medical imaging device, including:
one or more processors; and
a memory configured to store one or more programs;
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the imaging positioning methods for a medical imaging device described above.
In a fourteenth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to execute the dynamic fluoroscopy method of any embodiment of the present application.
In a fifteenth aspect, an embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute any of the imaging positioning methods for a medical imaging device described above.
In a sixteenth aspect, an embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute any of the image display methods described above.
In a seventeenth aspect, an embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, implement the volume reconstruction image generation method described above.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present application, and those of ordinary skill in the art may derive other drawings from the disclosed drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario of a dynamic fluoroscopy system of a C-arm device according to an embodiment of the present application;
FIG. 2 is an exemplary flowchart of a dynamic fluoroscopy method for a C-arm device according to an embodiment of the present application;
FIG. 3 is a flowchart of an imaging positioning method for a medical imaging device according to an embodiment of the present application;
FIG. 4 is a flowchart of an imaging positioning method for a medical imaging device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an interactive interface according to an embodiment of the present application;
FIG. 6 is a flowchart of an imaging positioning method for a medical imaging device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an imaging scene of a virtual human body model according to an embodiment of the present application;
FIG. 8 is a flowchart of an image display method according to an embodiment of the present application;
FIG. 9 is a flowchart of an image display method according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of a volume reconstruction image generation method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of the definition of an initial volume coordinate system according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the definition of a target volume coordinate system according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of a volume reconstruction image generation method according to an embodiment of the present application;
FIG. 14 is an exemplary block diagram of the dynamic fluoroscopy system of a C-arm device according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an imaging positioning apparatus for a medical imaging device according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a medical imaging device according to an embodiment of the present application;
FIG. 17 is a schematic diagram of an image display apparatus according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of an angiography device according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of a volume reconstruction image generation apparatus according to an embodiment of the present application;
FIG. 20 is a schematic structural diagram of a volume reconstruction image generation system according to an embodiment of the present application;
FIG. 21 is a schematic structural diagram of a control device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
To explain the technical solutions of the embodiments more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar scenarios according to these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures represent the same structures or operations.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a way of distinguishing different components, elements, parts, portions or assemblies at different levels. However, these words may be replaced by other expressions that achieve the same purpose.
As shown in the present application and the claims, unless the context clearly indicates otherwise, words such as "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural. Generally, the terms "include" and "comprise" only indicate the inclusion of explicitly identified steps and elements, which do not constitute an exclusive list; the method or device may also include other steps or elements.
Although the present application makes various references to certain modules or units in the systems according to the embodiments, any number of different modules or units may be used and run on the client and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in the present application to illustrate the operations performed by systems according to the embodiments. It should be understood that the preceding or following operations are not necessarily performed exactly in order; instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more operations may be removed from them.
In the description of the present application, it should be understood that orientation or position relationships indicated by terms such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial" and "circumferential" are based on the orientations or position relationships shown in the drawings, are only for convenience and simplification of description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present application.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, such as two or three, unless otherwise explicitly and specifically defined.
In the present application, unless otherwise explicitly specified and defined, terms such as "mounted", "connected", "coupled" and "fixed" should be understood in a broad sense; for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct or indirect through an intermediate medium; and may be an internal communication between two elements or an interaction between two elements, unless otherwise explicitly defined. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present application according to specific situations.
In the present application, unless otherwise explicitly specified and defined, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediate medium. Moreover, a first feature being "above", "over" or "on top of" a second feature may mean that the first feature is directly or obliquely above the second feature, or merely that the level of the first feature is higher than that of the second feature. A first feature being "below", "under" or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or merely that the level of the first feature is lower than that of the second feature.
It should be noted that when an element is referred to as being "fixed to" or "disposed on" another element, it may be directly on the other element or an intervening element may be present. When an element is considered to be "connected" to another element, it may be directly connected to the other element or an intervening element may be present at the same time. The terms "vertical", "horizontal", "upper", "lower", "left", "right" and similar expressions used herein are for illustrative purposes only and do not represent the only implementation.
Embodiment One
FIG. 1 is a schematic diagram of an application scenario of a dynamic fluoroscopy system of a C-arm device according to some embodiments of the present application. The dynamic fluoroscopy system 100 may include a fluoroscopy device 110, a network 120, at least one terminal 130, a processing device 140 and a storage device 150. The components of the system 100 may be connected to each other through the network 120. For example, the fluoroscopy device 110 and the at least one terminal 130 may be connected or communicate through the network 120.
The fluoroscopy device 110 may include DSA (digital subtraction angiography) devices, digital radiography (DR) devices, computed radiography (CR) devices, digital fluorography (DF) devices, CT scanners, magnetic resonance scanners, mammography machines, C-arm devices, etc. In some embodiments, the fluoroscopy device 110 may include a gantry, a detector, a detection region, a scanning table and a radiation source. The gantry may support the detector and the radiation source. The scanning table is used to place the subject to be scanned. The subject may include a patient, a phantom or another scanned object. The radiation source may emit X-rays toward the subject to irradiate it, and the detector may receive the X-rays. By imaging (i.e., irradiating) the subject over multiple imaging cycles, the fluoroscopy device 110 can acquire fluoroscopy data of the multiple imaging cycles to generate (or reconstruct) a dynamic image of the subject over the multiple imaging cycles.
The network 120 may include any suitable network capable of facilitating the exchange of information and/or data of the dynamic fluoroscopy system 100. In some embodiments, at least one component of the dynamic fluoroscopy system 100 (e.g., the fluoroscopy device 110, the processing device 140, the storage device 150, the at least one terminal 130) may exchange information and/or data with at least one other component of the system through the network 120. For example, the processing device 140 may obtain images of the subject from the fluoroscopy device 110 through the network 120. The network 120 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN)), a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, etc., or any combination thereof. For example, the network 120 may include a wired network, an optical fiber network, a telecommunication network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth(TM) network, a ZigBee(TM) network, a near field communication (NFC) network, etc., or any combination thereof. In some embodiments, the network 120 may include at least one network access point. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of the dynamic fluoroscopy system 100 may connect to the network 120 to exchange data and/or information.
The at least one terminal 130 may communicate and/or connect with the fluoroscopy device 110, the processing device 140 and/or the storage device 150. For example, the first fluoroscopy data and the second fluoroscopy data acquired by the processing device 140 may be stored in the storage device 150. In some embodiments, the at least one terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof. For example, the mobile device 131 may include a mobile control handle, a personal digital assistant (PDA), a smartphone, etc., or any combination thereof. In some embodiments, the at least one terminal 130 may include a display, which can display information related to the dynamic fluoroscopy process (such as the dynamic image of the subject).
In some embodiments, the at least one terminal 130 may include an input device, an output device, etc. The input device may use keyboard input, touch screen input (e.g., with haptic or tactile feedback), voice input, eye-tracking input, gesture-tracking input, brain monitoring system input, image input, video input, or any other similar input mechanism. Input information received through the input device may be transmitted, for example via a bus, to the processing device 140 for further processing. Other types of input devices may include cursor control devices, such as a mouse, a trackball or cursor direction keys. In some embodiments, an operator (e.g., a technician or doctor) may input, through the input device, an instruction reflecting the dynamic image category selected by the user. The output device may include a display, a speaker, a printer, etc., or any combination thereof, and may be used to output the dynamic image determined by the processing device 140. In some embodiments, the at least one terminal 130 may be part of the processing device 140.
The processing device 140 may process data and/or information obtained from the fluoroscopy device 110, the storage device 150, the at least one terminal 130 or other components of the dynamic fluoroscopy system 100. For example, the processing device 140 may obtain fluoroscopy data of the subject from the fluoroscopy device 110. In some embodiments, the processing device 140 may be a single server or a server group, which may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data from the fluoroscopy device 110, the storage device 150 and/or the at least one terminal 130 through the network 120. As another example, the processing device 140 may be directly connected to the fluoroscopy device 110, the at least one terminal 130 and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform, for example, a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, inter-cloud, multi-cloud, etc., or any combination thereof.
The storage device 150 may store data, instructions and/or any other information, for example, historical imaging protocols. In some embodiments, the storage device 150 may store data obtained from the fluoroscopy device 110, the at least one terminal 130 and/or the processing device 140, and may store data and/or instructions that the processing device 140 executes or uses to perform the exemplary methods described in this application. In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. In some embodiments, the storage device 150 may be implemented on a cloud platform.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with at least one other component of the dynamic fluoroscopy system 100 (e.g., the processing device 140, the at least one terminal 130). At least one component of the dynamic fluoroscopy system 100 may access the data (e.g., fluoroscopy data) stored in the storage device 150 through the network 120. In some embodiments, the storage device 150 may be part of the processing device 140.
It should be noted that the above description is provided for illustrative purposes only and is not intended to limit the scope of the present application. Those of ordinary skill in the art can make various changes and modifications under the guidance of this application. The features, structures, methods and other characteristics of the exemplary embodiments described herein can be combined in various ways to obtain additional and/or alternative exemplary embodiments. For example, the storage device 150 may be a data storage device including a cloud computing platform, such as a public cloud, private cloud, community cloud or hybrid cloud. However, these changes and modifications do not depart from the scope of the present application.
In some embodiments, the present application also relates to a C-arm imaging system. The C-arm imaging system may include a radiation source, a detector, a memory and a display, the radiation source including an X-ray tube, a high-voltage generator and a high-voltage control module. In some embodiments, the high-voltage control module may control the high-voltage generator to switch back and forth between a first energy and a second energy. Driven by the high-voltage generator, the radiation source may emit radiation toward the subject within one imaging cycle, and the detector may acquire first fluoroscopy data of the subject irradiated by first-energy radiation and second fluoroscopy data of the subject irradiated by second-energy radiation. The memory may store the first and second fluoroscopy data acquired in each of a plurality of consecutive imaging cycles. The memory also stores an image processing unit that performs subtraction processing on the first and second fluoroscopy data of each imaging cycle to obtain an image of the subject for each imaging cycle and thereby a dynamic image over the plurality of imaging cycles. The display may display the dynamic image of the subject over the plurality of imaging cycles.
In some embodiments, medical staff need to image a subject frequently to understand the subject's physical condition; however, a single picture can only reflect the state of the subject's lesions and/or tissues and organs at a certain moment, and often cannot meet the needs of medical staff. In particular, during clinical operations such as surgery/diagnosis, medical staff need to know in real time the pathological changes and movements of the subject as a whole, of local regions (e.g., bones, soft tissue), or of a specific lesion, and therefore need a fluoroscopy device 110 with a dynamic fluoroscopy function to continuously image the subject over one or more imaging cycles to obtain dynamic fluoroscopy images. However, in some embodiments, a fluoroscopy device can usually only irradiate the subject at a single energy to obtain fluoroscopy images, which prevents effective use of the photoelectric absorption effect and the Compton scattering effect to obtain the images the medical staff need, reducing the efficiency of diagnosis and clinical operations.
Therefore, some embodiments of the present application provide a dynamic fluoroscopy method that enables the radiation source to irradiate the subject alternately at different energies, so as to obtain fluoroscopy data of the subject under irradiation at different energies, and then obtain the dynamic fluoroscopy images the medical staff need based on the photoelectric absorption effect and the Compton scattering effect, effectively improving the efficiency of clinical operations and better assisting diagnosis/treatment.
FIG. 2 is an exemplary flowchart of a dynamic fluoroscopy method according to some embodiments of the present application. Specifically, the dynamic fluoroscopy method 200 may be performed by the dynamic fluoroscopy system 100 (e.g., the processing device 140). For example, the dynamic fluoroscopy method 200 may be stored in a storage device (e.g., the storage device 150) in the form of a program or instructions, and the method 200 is implemented when the dynamic fluoroscopy system 100 (e.g., the processing device 140) executes the program or instructions.
Step 210: image a subject within one imaging cycle, and during the imaging, acquire first fluoroscopy data of the subject irradiated by a radiation source at a first energy and second fluoroscopy data of the subject irradiated by the radiation source at a second energy different from the first energy. In some embodiments, step 210 may be performed by the imaging module 310.
The subject may refer to an object located at a certain position under the fluoroscopy device 110 to be irradiated by the radiation source. In some embodiments, the subject may include an object imaged at a certain body part (e.g., head, chest), such as a patient or a person undergoing a physical examination. In some embodiments, the subject may also be a specific organ, tissue or body part.
An imaging cycle can be understood as a time period within which the radiation source images the subject with radiation at the first energy and at the second energy, respectively. In some embodiments, imaging at the first energy and imaging at the second energy may be consecutive, i.e., imaging at the second energy starts immediately after imaging at the first energy finishes. For convenience, imaging (i.e., irradiating) the subject at the first energy may also be called first-energy imaging, and imaging at the second energy may be called second-energy imaging. In some embodiments, a high-voltage control module may control the high-voltage generator to switch between the second energy and the first energy, thereby switching between first-energy imaging and second-energy imaging. In some embodiments, there may be a certain time interval between first-energy imaging and second-energy imaging. In some embodiments, the imaging cycle may be 1/25 second, 1/50 second, etc.
In some embodiments, the fluoroscopy device 110 may include multiple radiation sources (e.g., a first radiation source and a second radiation source), each emitting radiation of a different energy. For example, the first radiation source may emit first radiation at the first energy and the second radiation source may emit second radiation at the second energy, where the first energy may be lower or higher than the second energy. In some embodiments, the fluoroscopy device 110 may also have only one radiation source, which emits radiation of different energies under the control of the high-voltage control module.
In some embodiments, the energy difference between the first energy and the second energy may be 5 keV to 120 keV. In some preferred embodiments, the energy difference may be 10 keV to 90 keV. In some preferred embodiments, the energy difference may be 20 keV to 100 keV. In some embodiments, the usable energy range of the first energy and that of the second energy may partially overlap or be spaced apart. For example, the usable range of the first energy may be 50 keV to 100 keV and that of the second energy 70 keV to 130 keV. As another example, the usable range of the first energy may be 60 keV to 90 keV and that of the second energy 100 keV to 120 keV. In some embodiments, the first energy may be 70 keV and the second energy 120 keV.
In some embodiments, the processing device 140 may acquire fluoroscopy data of the subject under irradiation by radiation (or beams) of different energies. For example, the processing device 140 may acquire the first fluoroscopy data of the subject under irradiation at the first energy and the second fluoroscopy data of the subject under irradiation at the second energy. In some embodiments, fluoroscopy data may refer to the data detected by the detector of the fluoroscopy device 110 after the radiation passes through the subject, and the memory of the fluoroscopy device 110 may store the first and second fluoroscopy data acquired in each of a plurality of consecutive imaging cycles. Further, the processing device 140 or an image processing unit in the memory may process the fluoroscopy data (e.g., by subtraction) to obtain images of the multiple imaging cycles and thereby a dynamic image over the multiple imaging cycles. In some embodiments, fluoroscopy data can also be understood as images reconstructed from the data detected by the detector; reconstruction methods such as half reconstruction or segment reconstruction may be used. In some embodiments, the processing device 140 may store the fluoroscopy data for subsequent processing to realize dynamic fluoroscopy.
It should be noted that the present application does not limit the imaging steps within one imaging cycle. The dynamic fluoroscopy system 100 may first irradiate the subject with the radiation source at the first energy to acquire the first fluoroscopy data and then at the second energy to acquire the second fluoroscopy data; alternatively, the imaging system 100 may first irradiate the subject at the second energy and then at the first energy. The specific imaging steps can be selected according to the actual situation.
Step 220: perform the imaging on the subject over a plurality of consecutive imaging cycles. In some embodiments, step 220 may be performed by the imaging module 310.
In some embodiments, the plurality of consecutive imaging cycles may include a first imaging cycle, a second imaging cycle, ..., an N-th imaging cycle, each cycle being followed immediately by the next. In some embodiments, a cycle may start at the moment imaging (e.g., first-energy imaging) begins and end at the moment imaging (e.g., second-energy imaging) finishes. In this case, after the radiation source completes the irradiation at the second energy in the previous cycle, the next cycle, i.e., irradiation of the subject at the first energy, can follow directly. In some embodiments, a cycle may also include a certain interval before the first-energy imaging starts and/or after the second-energy imaging finishes. In a specific embodiment, a cycle may start at the moment first-energy imaging begins; once the first-energy imaging finishes, the second-energy imaging may follow after a first time interval; and once the second-energy imaging finishes, the cycle may end, and the next cycle begin, after a second time interval. In some embodiments, the first and second time intervals may be equal. In a specific embodiment, the imaging cycle is related to the frame rate; for example, a DSA device can continuously capture 50 fluoroscopy frames per second, in which case two consecutive frames constitute one imaging cycle, i.e., 0.04 second.
Step 230: display a dynamic image of the subject according to the first fluoroscopy data and the second fluoroscopy data acquired in each of the plurality of consecutive imaging cycles. In some embodiments, step 230 may be performed by the display module 320.
In some embodiments, for each imaging cycle, the display module 320 may generate an image of the subject (e.g., a target image) based on the first and second fluoroscopy data acquired in that cycle. The target images of multiple consecutive imaging cycles may constitute the dynamic image of the subject, which can reflect the changes of the subject (e.g., lesions, tissues and/or organs) over the multiple consecutive imaging cycles.
In some embodiments, the category of the dynamic image (or target image) of the subject may include a soft tissue image, a bone image, and/or a composite image including at least soft tissue and bone. In some embodiments, a soft tissue image may refer to an image showing only the soft tissue part, a bone image to an image showing only the bone part, and a composite image to an image showing at least part of the soft tissue and at least part of the bone. In some embodiments, the composite image may be formed by integrating the soft tissue image and the bone image according to specific weights. The weights of the soft tissue image and the bone image in the composite image may be determined according to the wishes of the medical staff, for example, according to information input by the medical staff through the input device.
In some embodiments, the processing device 140 (e.g., the display module 320) may use dual-energy subtraction to determine and display the dynamic image of the subject according to the first and second fluoroscopy data acquired in each of the plurality of consecutive imaging cycles. Specifically, the strength of the photoelectric absorption effect is positively correlated with the relative atomic mass of the irradiated material (e.g., the subject's soft tissue or bone) and is the main way in which high-density materials such as calcium, bone and iodine contrast agent attenuate X-ray photon energy. The Compton scattering effect, by contrast, is independent of the relative atomic mass of the irradiated subject, is a function of the electron density of the body's tissues and organs, and occurs mainly in soft tissue. Dual-energy subtraction can exploit the different ways bone and soft tissue attenuate X-ray photon energy, together with the difference in photoelectric absorption between materials of different atomic mass, to determine the target image of the subject (e.g., a soft tissue image, bone image or composite image). These differences in attenuation and absorption are pronounced between X-ray beams of different energies, while the influence of the beam energy on the strength of the Compton scattering effect is almost negligible. The display module 320 may process the first and second fluoroscopy data by dual-energy subtraction, selectively removing or partially removing the attenuation information of bone or soft tissue, thereby obtaining a soft tissue image, bone image or composite image and displaying it on a display (e.g., the terminal 130). In some embodiments, the image processing unit in the memory may also process the first and second fluoroscopy data.
In some embodiments, the first fluoroscopy data may include a first projection image and the second fluoroscopy data may include a second projection image; the processing device 140 may determine the gray value Il(x, y) of each pixel in the first projection image and the gray value Ih(x, y) of each pixel in the second projection image. Based on the gray values of the pixels in the first and second projection images and a subtraction parameter ω, the processing device 140 may determine the gray value of each pixel in the target image according to formula (1), where the subtraction parameter ω can be set according to the different body parts of the subject.
Ides(x, y) = Il(x, y) / Ih(x, y)^ω         (1)
In other embodiments, the processing device 140 may also determine a first target image by superimposing the first projection image and the second projection image, and then determine a second target image by inverting the first target image. The first and second target images may be dynamic images of different categories; for example, the first target image may be a bone image and the second target image a soft tissue image.
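The subtraction and compositing described above can be sketched in code. The following is a minimal numpy illustration of formula (1) and of the weighted composite image; the function names, the float conversion, and the absence of any log-domain preprocessing are assumptions for illustration, not part of the original method.

```python
import numpy as np

def dual_energy_subtraction(low_img, high_img, omega):
    """Per-pixel dual-energy subtraction following formula (1):
    Ides(x, y) = Il(x, y) / Ih(x, y)**omega, with omega chosen per body part."""
    low = np.asarray(low_img, dtype=float)
    high = np.asarray(high_img, dtype=float)
    return low / np.power(high, omega)

def composite_image(soft, bone, w_soft, w_bone):
    """Weighted composite of a soft-tissue image and a bone image; the
    weights are set according to the operator's wishes."""
    return w_soft * np.asarray(soft, dtype=float) + w_bone * np.asarray(bone, dtype=float)
```

In practice `omega` would be tuned per anatomical region, and the result rescaled to a display gray range before being shown on the terminal 130.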
In some embodiments, the processing device 140 may obtain an instruction input by the user (e.g., medical staff or technician) reflecting the dynamic image category the user selects to display, and then determine and display the dynamic image of the subject based on the instruction. Medical staff can select the type of dynamic image generated by the fluoroscopy device 110 according to actual needs; for example, when operating on or diagnosing a bone region, they can choose to generate and display a bone image; for a soft tissue region, a soft tissue image; and when both the bone and the soft tissue need to be viewed at the same time, a composite image. In some embodiments, the instruction may include a selection instruction, a voice instruction or a text instruction. A voice instruction may refer to voice information input by the medical staff through the input device, for example, "display bone image". A text instruction may refer to text information input through the input device, for example, entering "display soft tissue image" on the input device. A selection instruction may refer to instructions or options displayed on the interface of the input device for the medical staff to choose from, for example, option 1: "display bone image"; option 2: "display soft tissue image"; option 3: "display composite image".
Embodiment Two
When medical equipment is used to diagnose or treat a tissue region of a patient, it is important to ensure that the equipment can be positioned at the target tissue region, so that the pathological characteristics of the target tissue region can be observed accurately or the treatment effect improved.
To achieve this, the approach widely used in the prior art is to pass X-rays through the human body: different tissue regions have different densities and therefore absorb X-rays to different degrees, and this differential absorption can distinguish tissue regions of different densities and thereby reveal the internal tissue information of the human body. However, the damage caused by ionizing radiation to the human body is extensive and difficult to predict.
In the embodiments included in this specification, the medical imaging device is described taking a DSA device (digital subtraction angiography device) as an example; those of ordinary skill in the art will understand that other medical imaging devices are also applicable to the following embodiments. Typically, a DSA device includes a C-arm, a radiation source and a detector, where the radiation source and the detector are mounted at the two ends of the C-arm, and the C-arm can be supported by a movable gantry, which may be ceiling-mounted, floor-mounted, or robotic. During imaging, the imaging object lies on a couch between the radiation source and the detector, and the DSA device performs fluoroscopy on the imaging object to obtain multiple frames of internal human body images, thereby assisting the doctor in operations such as surgery and guidewire insertion.
The present application is further described in detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.
FIG. 3 is a flowchart of an imaging positioning method for a medical imaging device according to an embodiment of the present application. This embodiment is applicable to positioning a target body part. The method may be performed by an imaging positioning apparatus, which may be implemented in software and/or hardware.
The method specifically includes the following steps:
S310: acquire a virtual human body model corresponding to the imaging object, and acquire first position information corresponding to a user operation instruction.
In one embodiment, optionally, acquiring the virtual human body model corresponding to the imaging object includes: selecting, according to acquired height data corresponding to the imaging object, a virtual human body model corresponding to the height data, where the virtual human body model includes a human body shape model and a human body internal model.
Specifically, the virtual human body model consists of two parts: a human body shape model related to the height data, and a human body internal model corresponding to the body shape model. The virtual human body model may be a two-dimensional model or a three-dimensional model.
Acquiring the height data corresponding to the imaging object may mean receiving height data of the imaging object input by the user, or acquiring historical medical images of the imaging object and determining the height data based on the image dimensions between key points in the historical medical images, where, exemplarily, the key points may include boundary points or joint points.
In one embodiment, optionally, the height data is matched against at least one stored height interval, and the virtual human body model corresponding to the successfully matched height interval is used as the virtual human body model corresponding to the height data. Exemplarily, the height intervals may be 1.0-1.1 m, 1.1-1.2 m, 1.2-1.3 m, 1.3-1.4 m, 1.4-1.5 m, 1.5-1.6 m, 1.6-1.7 m, 1.7-1.8 m and 1.9-2.0 m, etc., each height interval corresponding to one human body shape model. Exemplarily, the mean height of a height interval is used as the height of the body shape model corresponding to that interval; for example, the body shape model for the 1.0-1.1 m interval may have a height of 1.05 m.
It can be understood that the span of the height intervals can be smaller or larger, with correspondingly more or fewer reference human body models. Of course, a 2.0-2.1 m interval can also be added to the above intervals, or the 1.0-1.1 m interval removed.
In one embodiment, optionally, the human body internal model includes at least one of a blood vessel model, an organ model, a bone model and a muscle model. In one embodiment, optionally, the human body shape model and the human body internal model are stored correspondingly based on the height intervals. Exemplarily, for the height intervals 1.0-1.1 m, 1.1-1.2 m, 1.2-1.3 m, 1.3-1.4 m, 1.4-1.5 m, 1.5-1.6 m, 1.6-1.7 m, 1.7-1.8 m and 1.9-2.0 m, each interval corresponds to one human body internal model.
In one embodiment, optionally, when the height data does not match any stored height interval, the virtual human body model matching the height data is determined based on the virtual human body models of the two neighbouring height intervals. For example, if the height data is 1.45 m but the stored intervals are 1.0-1.1 m, 1.1-1.2 m, 1.2-1.3 m, 1.3-1.4 m, 1.5-1.6 m, 1.6-1.7 m, 1.7-1.8 m and 1.9-2.0 m, the virtual human body model corresponding to a 1.4-1.5 m interval is estimated from the models for 1.3-1.4 m and 1.5-1.6 m, and the estimated model is used as the one matching the height data. Exemplarily, the estimation method may be interpolation or averaging; the specific method is not limited here. The benefit of this arrangement is to maximize the match between the acquired virtual human body model and the imaging object, thereby reducing the error between the subsequent internal human body image and the real tissue image.
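As a hedged sketch, the interval matching and fall-back just described might look like the following; the dictionary layout and the decision to return the two neighbouring models (leaving the interpolation/averaging itself open, as the text does) are illustrative assumptions.

```python
def select_body_model(height_m, models):
    """models: dict mapping (low, high) height intervals in metres to model
    objects (labels here). Returns the matching model, or the pair of
    neighbouring models when no stored interval contains the height."""
    for (low, high), model in sorted(models.items()):
        if low <= height_m < high:
            return model
    # no direct match: pick the nearest stored intervals below and above
    below = max((iv for iv in models if iv[1] <= height_m), default=None)
    above = min((iv for iv in models if iv[0] > height_m), default=None)
    return (models.get(below), models.get(above))
```

The caller would then interpolate or average between the returned neighbouring models to estimate the missing interval's model.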
In one embodiment, optionally, the first position information includes first model information corresponding to the virtual human body model or first device information corresponding to the medical imaging device, where an association exists between the first model information and the first device information. Specifically, based on this association, when the first model information changes, the first device information also changes, and vice versa.
In one embodiment, optionally, the association includes a positional association, and the method further includes: converting the relative position relationship between the medical imaging device and the imaging object into a positional association between the medical imaging device and the virtual human body model, where the positional association characterizes the relationship of position parameters between the first model information and the first device information.
Exemplarily, the medical imaging device is matched with the imaging object via a preset reference point or reference line to establish the relative position relationship between them. The preset reference point or reference line may be one preset on the treatment couch; exemplarily, the preset reference line may be a line on the couch aligned with the top of the imaging object's head.
Specifically, the first model information includes model position parameters corresponding to the virtual human body model, and the first device information includes device position parameters corresponding to the medical imaging device. Exemplarily, the model position parameters may be the centre position and model angle corresponding to the internal human body image, and the device position parameters may be the centre position of the radiation source and the angle of the axis between the radiation source and the detector relative to the treatment couch.
In one embodiment, optionally, the association further includes a field-of-view association, and the method further includes: acquiring the field-of-view association between the first model information and the first device information, where the field-of-view association characterizes the relationship of field-of-view parameters between the first model information and the first device information.
Specifically, the first model information includes model field-of-view parameters corresponding to the virtual human body model, and the first device information includes device field-of-view parameters corresponding to the medical imaging device. Exemplarily, the model field-of-view parameter may be the image size of the internal human body image, and the device field-of-view parameters may be at least one of magnification, source-image distance and source-object distance. A mapping list may be pre-established to describe the field-of-view association between the first model information and the first device information.
S320: based on the first position information, determine the internal human body image in the virtual human body model corresponding to the first position information, and display the internal human body image.
Exemplarily, the internal human body image is a virtual image of the virtual human body model corresponding to the first position information. Specifically, when the virtual human body model is three-dimensional, the internal human body image may be a three-dimensional image or a two-dimensional projection image corresponding to the angle information in the first position information.
In one embodiment, optionally, the internal human body image contains a model image corresponding to at least one human body internal model in the virtual human body model, where the internal model includes at least one of a blood vessel model, an organ model, a bone model and a muscle model. In one embodiment, the user can select on the interactive interface the internal model corresponding to the internal human body image; for example, if the user selects the blood vessel model before imaging blood vessels, the internal human body image shows only the model image of the blood vessel model. In another embodiment, the user can select the transparency of the internal models on the interactive interface; for example, if the transparency of the blood vessel model is 0% and that of the other models is 100%, the internal human body image likewise shows only the model image of the blood vessel model. The benefit of this arrangement is to minimize the visual interference caused by models other than the target model, thereby improving the accuracy of subsequent positioning based on the internal human body image.
S330: if the internal human body image corresponding to the first position information corresponds to the target scan position, determine the target imaging position of the medical imaging device according to the first position information.
Specifically, the target scan position may be a position preset by the user according to the imaging plan before the positioning is performed. Exemplarily, the target scan position may be a body position such as the chest, head or abdomen, or an organ position such as the stomach, small intestine, oesophagus or throat; the specific content of the target scan position is not limited here. Exemplarily, if the internal human body image contains an image corresponding to the target scan position, the internal human body image is considered to correspond to the target scan position.
S340: if the internal human body image corresponding to the first position information does not correspond to the target scan position, continue to acquire second position information different from the first position information, and determine the target imaging position of the medical imaging device according to the internal human body image corresponding to the second position information.
Specifically, when the first position information is first model information, the second position information is second model information; when the first position information is first device information, the second position information is second device information. In the technical solution of this embodiment, the internal human body image corresponding to the first position information input by the user is determined based on the virtual human body model, and the target imaging position of the medical imaging device is obtained by judging whether the internal human body image corresponds to the target scan position. This solves the problem of excessive radiation to the human body during the imaging operation: the user can set parameters arbitrarily during positioning without causing any harm to the human body, which also ensures positioning accuracy.
FIG. 4 is a flowchart of an imaging positioning method for a medical imaging device provided by the present application; the technical solution of this embodiment further refines the above embodiments. Optionally, acquiring the first position information corresponding to the user operation instruction includes: when the first position information is first model information, displaying the virtual human body model on the interactive interface, and displaying, on the virtual human body model, the first model information corresponding to the user operation instruction.
The specific implementation steps of this embodiment include:
S410: acquire the virtual human body model corresponding to the imaging object.
S420: display the virtual human body model on the interactive interface, and display on the virtual human body model the first model information corresponding to the user operation instruction.
In one embodiment, optionally, the first model information includes a graphical marker.
Exemplarily, the shape of the graphical marker may be a square, circle, rhombus or arbitrary shape; the specific shape is not limited here. In one embodiment, optionally, the graphical marker performs at least one of the operations of selection, movement, reduction and enlargement based on the user operation instruction. Exemplarily, when a selection operation instruction input by the user is received, the graphical marker corresponding to the selection operation instruction is displayed. Specifically, when the virtual human body model is three-dimensional, the graphical marker can move not only in the XOY plane of the model but also in its XOZ plane and/or YOZ plane, and movement includes translation and rotation. In other words, by moving the graphical marker, the model image at any slice and angle of the virtual human body model can be selected.
S430: based on the first model information, determine the internal human body image in the virtual human body model corresponding to the first model information, and display the internal human body image.
In one embodiment, when the first model information is a graphical marker, the internal human body image corresponding to the marker is determined according to the size of the marker and its position relative to the virtual human body model, where the position of the marker relative to the model includes its angle and position coordinates relative to the model.
FIG. 5 is a schematic diagram of an interactive interface provided by an embodiment of the present application. As shown in FIG. 5, the image on the left contains the virtual human body model and the graphical marker on it (black box), and the image on the right shows the internal human body image in the virtual human body model corresponding to the graphical marker.
S440: if the internal human body image corresponding to the first model information corresponds to the target scan position, determine the target imaging position of the medical imaging device according to the first model information.
S450: if the internal human body image corresponding to the first model information does not correspond to the target scan position, continue to acquire second model information different from the first model information, and determine the target imaging position of the medical imaging device according to the internal human body image corresponding to the second model information.
In one embodiment, optionally, before continuing to acquire the second position information different from the first position information, the method further includes: when the first position information is first model information, determining a first imaging position of the medical imaging device according to the first model information and the association, and controlling the medical imaging device to move to the first imaging position. The technical effect of this embodiment is that, during imaging, the user views the internal human body image by inputting at least one piece of first model information and, while viewing, the imaging assembly of the medical imaging device moves as the first model information changes. When the internal human body image corresponding to the second model information corresponds to the target scan position, the current imaging position of the imaging assembly is the target imaging position.
In another embodiment, optionally, determining the target imaging position of the medical imaging device according to the first position information includes: when the first position information is first model information, determining the target imaging position of the medical imaging device according to the first model information and the association, and controlling the medical imaging device to move to the target imaging position. The technical effect of this embodiment is that, during imaging, the user views the internal human body image by inputting at least one piece of first model information and, while viewing, the imaging assembly of the medical imaging device need not move as the first model information changes. When a positioning instruction input by the user is received, the medical imaging device is controlled to move from the initial position to the target imaging position based on the target device information.
In this embodiment, optionally, after controlling the medical imaging device to move to the target imaging position based on the target device information, the method further includes: controlling the imaging assembly of the medical imaging device to perform the imaging operation based on field-of-view parameters, where the field-of-view parameters include at least one of source-image distance, source-object distance and magnification. Specifically, the source-image distance denotes the distance between the radiation source and the detector, the source-object distance denotes the distance between the radiation source and the imaging object, and the magnification denotes the enlargement factor of the acquired image.
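The three parameters are geometrically linked: in an ideal point-source projection, similar triangles give a magnification of M = SID / SOD. This relation is standard cone-beam geometry rather than something stated in the text, so treat the following sketch as an illustrative assumption.

```python
def magnification(sid_mm, sod_mm):
    """Geometric magnification of a point-source projection: M = SID / SOD,
    where SID is the source-image distance and SOD the source-object distance."""
    if not 0 < sod_mm <= sid_mm:
        raise ValueError("require 0 < SOD <= SID")
    return sid_mm / sod_mm
```

For example, with the detector 1000 mm from the source and the imaging object 800 mm from the source, the object is projected at 1.25 times its true size.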
In the technical solution of this embodiment, displaying the virtual human body model and the graphical marker on the interactive interface solves the problem that directly inputting first model information is complicated for the user. The user can intuitively observe the virtual human body model and, by changing the positional relationship between the graphical marker and the model, preview the internal structure of the human body at any relative position without performing fluoroscopy on the actual human body.
FIG. 6 is a flowchart of an imaging positioning method for a medical imaging device provided by the present application; the technical solution of this embodiment further refines the above embodiments. Optionally, determining, based on the first position information, the internal human body image in the virtual human body model corresponding to the first position information includes: when the first position information is first device information, determining the first model information corresponding to the virtual human body model according to the first device information and the association, and determining the internal human body image based on the first model information.
The specific implementation steps of this embodiment include:
S610: acquire the virtual human body model corresponding to the imaging object, and acquire the first device information corresponding to the user operation instruction.
Exemplarily, the user operation instruction may be a parameter input instruction, and the first device information input by the user is received. In one example, the device parameter information of the medical imaging device input by the user is used as the first device information. In another example, because the system coordinates of the medical imaging device are calibrated in advance, the coordinates of its components at every position can be updated in real time and are known to the system. Thus, when an operator, such as a doctor, moves the medical imaging device from a first position to a second position, the system knows the system coordinates of the components of the device at both the first position and the second position.
S620: determine the first model information corresponding to the virtual human body model according to the first device information and the association, and determine the internal human body image based on the first model information.
Specifically, according to the association, the first device information input by the user is converted into the first model information corresponding to the virtual human body model, and the internal human body image corresponding to the first model information is determined.
In one embodiment, optionally, the association includes a matching relationship between a field-of-view parameter in the first device information, such as the source-image distance, and the image depth in the first model information, where the source-image distance denotes the distance between the radiation source and the detector. FIG. 7 is a schematic diagram of an imaging scene of a virtual human body model provided by an embodiment of the present application. The three solid lines and one dashed line emitted from the radiation source represent the beams it emits, i.e., a cone beam passing through the virtual human body model. FIG. 7 shows the field-of-view ranges corresponding to different depth planes of the virtual human body model as the beam propagates; it can be seen intuitively that, along the propagation path of the beam, different depth planes correspond to different field-of-view ranges.
In one embodiment, optionally, the medical imaging device includes a digital radiography device, a C-arm X-ray device, a mammography machine, a computed tomography device, a magnetic resonance device, a positron emission tomography (PET) device, a positron emission tomography and computed tomography (PET-CT) device, a positron emission tomography and magnetic resonance (PET-MR) device, or a radiotherapy (RT) imaging device.
S630: if the internal human body image corresponding to the first device information corresponds to the target scan position, determine the target imaging position of the medical imaging device according to the first position information.
In one embodiment, optionally, the imaging position in the first device information is used as the target imaging position of the medical imaging device.
On the basis of the above embodiments, optionally, the method further includes: displaying the virtual human body model on the interactive interface, and displaying the graphical marker on the virtual human body model based on the first device information. Specifically, the first model information is determined according to the first device information and the second matching information, and the graphical marker is displayed according to the first model information. The benefit of this arrangement is that after inputting the first device information, the user can continue to input first model information by selecting, moving, reducing or enlarging the graphical marker. The first device information and the second model information are input alternately, so that the internal human body image is determined from different dimensions, further improving positioning accuracy.
S640: if the internal human body image corresponding to the first device information does not correspond to the target scan position, continue to acquire second device information different from the first device information, and determine the target imaging position of the medical imaging device according to the internal human body image corresponding to the second device information.
In the technical solution of this embodiment, the first device information input by the user is received, and the internal human body image is displayed according to the association between the first device information and the first model information, enabling rapid positioning.
In one embodiment, the content disclosed in Embodiment One above can be applied in Embodiment Two. For example, after the target imaging position of the imaging object (the imaging object being the subject of Embodiment One) is determined based on the content disclosed in Embodiment Two, the imaging object is imaged within one imaging cycle, during which first fluoroscopy data of the imaging object irradiated by the radiation source at the first energy and second fluoroscopy data of the imaging object irradiated at a second energy different from the first energy are acquired, and the target imaging position of the imaging object is imaged over multiple consecutive imaging cycles. A dynamic image of the imaging object is then displayed according to the first and second fluoroscopy data acquired in each of the multiple consecutive imaging cycles.
Embodiment Three
Angiography is an auxiliary examination technique that uses the principle that X-rays cannot penetrate a contrast agent to observe vascular lesions, and it is widely used in the clinical diagnosis and treatment of various diseases. Angiography is also a minimally invasive technique: a catheter is inserted into the vessel under examination to inject the contrast agent into it, and an imaging device then images the vessel.
When reviewing the imaging results, in order to obtain comprehensive vascular information, the doctor needs to repeatedly enlarge or reduce the image and thus constantly switch between images of different fields of view on the display screen.
With the above existing technical solution, if a doctor performing an angiographic examination needs to compare images of different fields of view, the images must be switched constantly. Since the user must keep switching the blood vessel images on the display device, and each switch requires the imaging object to be rescanned, the diagnostic efficiency of angiography is reduced, and the repeated switching also increases the risk of errors in the angiographic diagnostic results.
FIG. 8 is a flowchart of an image display method provided by an embodiment of the present application. This embodiment is applicable to imaging with an angiography device and displaying the imaging results. The method may be performed by an image display apparatus, which may be implemented in software and/or hardware and may be configured in an angiography device. The method specifically includes the following steps:
S810: acquire the first blood vessel image of the imaging object collected by the angiography device based on the first field-of-view parameter, and display the first blood vessel image.
Angiography is a technique for visualizing blood vessels in X-ray image sequences. A contrast agent is injected into the vessel and, because X-rays cannot penetrate the contrast agent, angiography scans the examined region with X-rays; the contrast agent creates a material density difference between the vessels and other tissue in the examined region, so the scanned image can accurately reflect the location and extent of vascular lesions.
Exemplarily, the first field-of-view parameter includes, but is not limited to, at least one of a first field-of-view range, a first magnification, a first field-of-view centre position and a first imaging mode. Taking the abdomen as the imaging object, specifically, the first field-of-view range may describe the size of the region of the first blood vessel image corresponding to the imaging object. Exemplarily, if the overall region size of the abdomen is 100*100, the first field-of-view range may be 100*100 or 30*50; when it is 30*50, the first field-of-view range covers only part of the abdominal field of view. Specifically, the first magnification may describe the enlargement factor of the first blood vessel image relative to the imaging object, such as enlarging the vessel 100 times in the image. The first magnification may be a relative magnification with respect to a standard magnification, or the actual magnification of the image relative to the vessel. If the first magnification is a relative magnification with respect to the standard magnification, then when it is greater than the standard magnification, the first blood vessel image is enlarged relative to the image at standard magnification, and when it is smaller, the first blood vessel image is reduced relative to that image.
Specifically, the first field-of-view centre position may describe the image centre position of the first blood vessel image; from the perspective of the angiography device, it may describe the position on the detector corresponding to the beam centre of the radiation source. Specifically, the first imaging mode includes a fluoroscopy mode or an exposure mode, where fluoroscopy is an imaging mode with a lower radiation dose and lower image resolution, and exposure is an imaging mode with a higher radiation dose and higher image resolution. Specifically, in practical application scenarios, the user can switch the imaging mode by using the Zoom function on the angiography device.
In one embodiment, displaying the first blood vessel image includes: displaying the first blood vessel image in a preset display region of the display device, or on the entire display interface of the display device.
S820: determine the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter.
In this embodiment, the parameter adjustment instruction includes a zoom adjustment ratio. Specifically, when the zoom adjustment ratio is greater than one, the second magnification in the second field-of-view parameter is greater than the first magnification, or the second field-of-view range is smaller than the first field-of-view range. When the zoom adjustment ratio is less than one, the second magnification is smaller than the first magnification, or the second field-of-view range is larger than the first field-of-view range. From the perspective of the blood vessel image, a zoom adjustment ratio greater than one enlarges the first blood vessel image, and a ratio less than one reduces it.
When the parameter adjustment instruction contains only the zoom adjustment ratio and the ratio is 1, the first field-of-view parameter is the second field-of-view parameter, and the subsequently obtained second blood vessel image is identical to the first. In one embodiment, optionally, after determining the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter, the method further includes: generating prompt information if the first and second field-of-view parameters are identical. Specifically, the prompt information may be used to remind the user that a blood vessel image corresponding to the parameter adjustment instruction already exists and, further, to indicate the display position of the blood vessel image corresponding to the current field-of-view adjustment parameters. Exemplarily, the prompt may be at least one of a text prompt, a voice prompt and an indicator light. A corresponding indicator light may be provided for at least one display device; if the blood vessel image currently displayed on the display device corresponds to the current parameter adjustment instruction, the indicator light flashes. The benefit of this arrangement is to avoid displaying blood vessel images with identical field-of-view parameters, which would waste display space and reduce the utilization of the display device.
In one embodiment, optionally, determining the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter includes: determining the second magnification in the second field-of-view parameter based on the zoom adjustment ratio and the first magnification in the first field-of-view parameter. Exemplarily, if the first magnification is 2 times relative to the preset standard magnification and the zoom adjustment ratio is 3, the second magnification is 6 times.
In one embodiment, optionally, determining the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter includes: determining the second field-of-view range in the second field-of-view parameter based on the zoom adjustment ratio and the first field-of-view range. Exemplarily, if the first field-of-view range is 10*10 and the zoom adjustment ratio is 2, the second field-of-view range is 5*5.
On the basis of the above embodiments, when the parameter adjustment instruction contains only the zoom adjustment ratio, the second magnification and the second field-of-view range are calculated from the first magnification, the first field-of-view range and the zoom adjustment ratio. Exemplarily, the second field-of-view centre position may default to the first field-of-view centre position, and the second imaging mode to the first imaging mode.
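The two worked examples above (2x with ratio 3 giving 6x; 10*10 with ratio 2 giving 5*5) can be combined into one helper; carrying the centre position and imaging mode over unchanged mirrors the stated defaults, and the tuple representation of the field of view is an illustrative choice.

```python
def second_view_params(first_mag, first_fov, zoom_ratio, center=None, mode=None):
    """Derive the second magnification and field-of-view range from the first
    ones and the zoom adjustment ratio; centre position and imaging mode
    default to the first view's values when not overridden."""
    second_mag = first_mag * zoom_ratio
    second_fov = tuple(d / zoom_ratio for d in first_fov)
    return second_mag, second_fov, center, mode
```

A ratio of 1 reproduces the first parameters exactly, which is the case where the prompt information described above would be generated instead of a duplicate image.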
On the basis of the above embodiments, optionally, the parameter adjustment instruction further includes at least one of a field-of-view adjustment range, a field-of-view adjustment centre position and an imaging adjustment mode. Specifically, the user can customize the second field-of-view range, the second field-of-view centre position and the second imaging mode in the second field-of-view parameter; the field-of-view adjustment range, the field-of-view adjustment centre position and the imaging adjustment mode in the parameter adjustment instruction are used as the second field-of-view range, the second field-of-view centre position and the second imaging mode, respectively. The parameter adjustment instruction may be generated from parameter values input by the user, or from a selection operation input by the user on the displayed first blood vessel image. Exemplarily, the user can input the zoom adjustment ratio by using the Zoom function on the angiography device, or input an image selection box on the first blood vessel image, with the field-of-view adjustment range generated from the size of the input selection box.
S830: determine the second blood vessel image based on the second field-of-view parameter, and display the second blood vessel image synchronously with the first blood vessel image.
In one embodiment, optionally, determining the second blood vessel image based on the second field-of-view parameter includes: acquiring the second blood vessel image collected by the angiography device based on the second field-of-view parameter; specifically, controlling the angiography device to collect the second blood vessel image based on the second field-of-view parameter.
It should be noted that the embodiments of the present application are explained taking the synchronous display of two blood vessel images as an example; in practical applications, more blood vessel images corresponding to different field-of-view parameters can be obtained according to the image display method provided in the embodiments of the present application and displayed synchronously. The number of synchronously displayed blood vessel images is not limited here.
In the technical solution of this embodiment, while the first blood vessel image is displayed, the second blood vessel image is determined based on the received parameter adjustment instruction and displayed synchronously with the first. This solves the problem of repeatedly switching the display between scan images of different fields of view, reduces the number of repeated acquisitions and the radiation dose, extends the service life of the angiography device, improves the diagnostic efficiency of angiography, and reduces the diagnostic errors caused by repeated switching.
FIG. 9 is a flowchart of an image display method provided by an embodiment of the present application; this embodiment further refines the above embodiments. Optionally, displaying the second blood vessel image synchronously with the first blood vessel image includes: when the first field-of-view parameter corresponding to the first blood vessel image changes, acquiring an updated first blood vessel image collected by the angiography device based on the changed first field-of-view parameter; determining an updated second field-of-view parameter based on the changed first field-of-view parameter and the parameter adjustment instruction; and displaying the updated second blood vessel image determined from the updated second field-of-view parameter synchronously with the updated first blood vessel image.
The specific implementation steps of this embodiment include:
S910: acquire the first blood vessel image of the imaging object collected by the angiography device based on the first field-of-view parameter, and display the first blood vessel image.
In one implementation, combined with Embodiment One, step S910 is specifically: image the imaging object with the angiography device within one imaging cycle, acquiring first fluoroscopy data of the imaging object irradiated at the first energy based on the first field-of-view parameter and, during the imaging, second fluoroscopy data of the imaging object irradiated by the radiation source at a second energy different from the first energy. The first blood vessel image of the imaging object is then determined from the first and second fluoroscopy data and displayed.
In another implementation, combined with Embodiment Two, step S910 is specifically: acquire the virtual human body model corresponding to the imaging object; display the model on the interactive interface and display on it the first model information corresponding to the user operation instruction; and, based on the first model information, determine and display the corresponding internal human body image. If the internal human body image corresponding to the first model information corresponds to the target first blood vessel position, determine the first blood vessel position of the medical imaging device according to the first model information; if not, continue to acquire second model information different from the first model information and determine the first blood vessel position according to the internal human body image corresponding to the second model information. After the first blood vessel position is determined, acquire the first blood vessel image of the imaging object collected by the angiography device based on the first field-of-view parameter and display it.
S920: determine the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter, and determine the second blood vessel image based on the second field-of-view parameter.
In one implementation, combined with Embodiment One, step S920 is specifically: determine the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter, and image the imaging object with the angiography device within one imaging cycle, acquiring third fluoroscopy data of the imaging object irradiated at the first energy based on the second field-of-view parameter and, during the imaging, fourth fluoroscopy data of the imaging object irradiated by the radiation source at a second energy different from the first energy. The second blood vessel image of the imaging object is then determined from the third and fourth fluoroscopy data and displayed.
In another implementation, combined with Embodiment Two, step S920 is specifically: acquire the virtual human body model corresponding to the imaging object; display the model on the interactive interface and display on it the first model information corresponding to the user operation instruction; and, based on the first model information, determine and display the corresponding internal human body image. If the internal human body image corresponding to the first model information corresponds to the target second blood vessel position, determine the second blood vessel position of the medical imaging device according to the first model information; if not, continue to acquire second model information different from the first model information and determine the second blood vessel position according to the internal human body image corresponding to the second model information. After the second blood vessel position is determined, acquire the second blood vessel image of the imaging object collected by the angiography device based on the second field-of-view parameter and display it.
In one embodiment, optionally, the imaging object includes a blood vessel and a catheter inserted into it, and determining the second field-of-view parameter based on the received parameter adjustment instruction and the first field-of-view parameter further includes: when the zoom adjustment ratio is greater than one, acquiring the catheter-tip image corresponding to the catheter in the first blood vessel image, and using the field-of-view centre position and field-of-view range corresponding to the catheter-tip image as the second field-of-view centre position and second field-of-view range in the second field-of-view parameter.
Angiography is a technique of injecting contrast agent into the target vessel through catheter intervention. Specifically, in angiography the catheter is inserted into the vessel, the contrast agent is injected into the catheter, and the catheter delivers it to the designated vessel position. During catheter intervention, the vessel needs to be imaged in real time so that the position of the catheter in the vessel can be observed. In this embodiment, the first blood vessel image contains a catheter image, which includes a catheter-tip image, i.e., an image obtained by imaging the tip of the catheter.
Acquiring the catheter-tip image corresponding to the catheter in the first blood vessel image specifically means dividing the first blood vessel image according to preset division rules to obtain the catheter-tip image; exemplarily, the preset division rules include a preset image size and a preset image shape. For example, the preset image shape may be a square, rectangle, circle or irregular shape, and the preset image size may be determined according to the preset image shape: exemplarily, when the shape is a square, the size may be 10 mm * 10 mm, and when it is a circle, the size may be a radius of 5 mm. Of course, the preset image size may also refer to the image area of the catheter-tip image.
The benefit of this arrangement is that automatically identifying the catheter-tip image in the first blood vessel image avoids the step of manually selecting the region-of-interest image. When the first blood vessel image changes, the catheter-tip image in the changed first blood vessel image can also be tracked and identified in real time, greatly improving the diagnostic efficiency of the subsequent angiography process.
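A minimal sketch of the preset-division step, assuming the tip coordinates are already known (the text leaves the tip-detection method itself open) and using a square window measured in pixels as a stand-in for the 10 mm * 10 mm example:

```python
def crop_tip_roi(image, tip_rc, size=10):
    """Cut a square region of `size` pixels centred on the catheter-tip
    coordinates tip_rc = (row, col) out of a 2-D image (list of rows);
    the crop is clipped at the image border."""
    r, c = tip_rc
    half = size // 2
    rows = image[max(r - half, 0):r + half]
    return [row[max(c - half, 0):c + half] for row in rows]
```

The centre and extent of this region would then serve as the second field-of-view centre position and range.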
On the basis of the above embodiments, optionally, determining the second blood vessel image based on the second field-of-view parameter includes: acquiring the second blood vessel image collected by the angiography device based on the second field-of-view parameter; or, when the zoom adjustment ratio is greater than one, determining a reference blood vessel image in the first blood vessel image based on the second field-of-view centre position and second field-of-view range in the second field-of-view parameter, and determining the second blood vessel image based on the second magnification in the second field-of-view parameter and the reference blood vessel image.
Regardless of whether the zoom adjustment ratio is greater or less than one, the angiography device can be controlled to collect the second blood vessel image based on the second field-of-view parameter. In particular, when the ratio is less than one, the first blood vessel image is being reduced, in which case the second field-of-view range is usually larger than the first, i.e., the first blood vessel image does not contain all the image information required for the second; the angiography device must then be controlled to collect the second blood vessel image again based on the second field-of-view parameter. In another embodiment, when the ratio is greater than one, the first blood vessel image is being enlarged and usually contains all the image information required for the second. Of course, there may also be cases where it does not, for example when the second field-of-view centre position is a boundary point of the first blood vessel image and the second field-of-view range is not 0; in such cases the angiography device can likewise be controlled to collect the second blood vessel image based on the second field-of-view parameter. This embodiment takes as an example the case where the first blood vessel image contains all the image information required for the second: the reference blood vessel image contains all the image information required for the second blood vessel image, but its magnification is the first magnification, so an enlargement operation based on the second magnification in the second field-of-view parameter must still be performed on the reference image to obtain the second display image. Exemplarily, the algorithms used in the enlargement operation include, but are not limited to, at least one of nearest-neighbour interpolation, bilinear interpolation and higher-order interpolation.
The benefit of this arrangement is that by cropping the reference blood vessel image directly from the first blood vessel image and obtaining the second blood vessel image directly from the reference image and the second field-of-view parameter, the number of repeated acquisitions and the radiation dose are further reduced, extending the service life of the angiography device.
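The crop-then-enlarge branch can be sketched as follows; nearest-neighbour interpolation is used here only because it is the simplest of the options listed (bilinear or higher-order would be drop-in replacements), and the row/column indexing convention is an assumption.

```python
import numpy as np

def digital_zoom(first_image, center_rc, fov_hw, scale):
    """Crop a reference image of height/width fov_hw around center_rc from
    the first vessel image, then enlarge it `scale` times by mapping each
    output pixel back to its nearest source pixel."""
    r, c = center_rc
    h, w = fov_hw
    ref = np.asarray(first_image)[r - h // 2:r - h // 2 + h,
                                  c - w // 2:c - w // 2 + w]
    rows = (np.arange(int(h * scale)) / scale).astype(int)
    cols = (np.arange(int(w * scale)) / scale).astype(int)
    return ref[np.ix_(rows, cols)]
```

When the requested crop would fall outside the first image, the fall-back is the re-acquisition path described above.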
S930: display the second blood vessel image synchronously with the first blood vessel image.
In one embodiment, optionally, displaying the second blood vessel image synchronously with the first blood vessel image includes displaying them simultaneously. The benefit of this arrangement is to avoid the user repeatedly switching between the first and second blood vessel images.
S940: when the first field-of-view parameter corresponding to the first blood vessel image changes, acquire the updated first blood vessel image collected by the angiography device based on the changed first field-of-view parameter.
S950: determine the updated second field-of-view parameter based on the changed first field-of-view parameter and the parameter adjustment instruction.
When the parameter adjustment instruction is unchanged and the first field-of-view parameter changes, the first blood vessel image also changes, and the second field-of-view parameter follows the first, yielding the updated second field-of-view parameter.
S960: display the updated second blood vessel image, determined from the updated second field-of-view parameter, synchronously with the updated first blood vessel image.
On the basis of the above embodiments, optionally, displaying the second blood vessel image synchronously with the first blood vessel image includes: displaying the second blood vessel image, on the display device corresponding to the first blood vessel image, in a display region different from that of the first; or displaying the second blood vessel image on a display device different from that of the first. Specifically, in one embodiment, the display interface of the same display device contains at least two image display regions, one of which is used to display the first display image and another the second display image. In another embodiment, the angiography device includes at least two display devices, one of which is used to display the first display image and another the second display image.
Of course, when both the first and second field-of-view parameters change, the resulting updated first and second blood vessel images can also be displayed in different display regions of the display device, or on different display devices.
In the technical solution of this embodiment, the updated second field-of-view parameter is determined based on the changed first field-of-view parameter and the parameter adjustment instruction, and the updated second blood vessel image determined from the updated second field-of-view parameter is displayed synchronously with the updated first blood vessel image. This removes the step of the user re-determining the second blood vessel image when the first changes, improves the diagnostic efficiency of angiography, and avoids the positioning errors caused by human factors in repeated confirmation.
实施例四
乳腺癌在全球范围内都是严重威胁女性健康的重要疾病。乳腺X射线摄影目前被公认为乳腺癌的首选检查方式。随着影像设备不断更新,数字化乳腺断层合成技术,又称数字乳腺断层摄影(Digital Breast Tomosynthesis,简称DBT)出现,数字乳腺断层摄影是一种三维成像技术,可以在短暂的扫描过程中,在不同角度获得乳房的投影数据,然后将这些独立的图像重建成包含一系列高分辨率的乳腺三维断层图像。这些断层图像单独显示或以连续播放的形式动态显示。每个断层图像显示乳房的每个断层的结构,整个乳腺三维断层图像表示重建后的乳房。
在现有技术中,通常在三维断层图像上采用一些投影方法(譬如最大密度投影、平均值投影等)获取一张与二维数字乳腺摄影图像类似的2D图像。但通过DBT成像系统扫描后,由于扫描角度小,Z轴分辨率较差,只能在平行于探测器的方向上获得一张较好的图像,而不能获得其他方向上的信息,这不利于用户观察病灶的整体分布和病灶定位。
为解决上述问题,图10为本申请实施例提供的一种容积重建图像生成方法的流程示意图,本实施例可适用于确定在期望重建方向上的目标容积重建图像的情况,好处是可以得到期望重建方向上以及与期望重建方向垂直的方向上的目标容积重建图像,即目标容积重建图像的图像信息丰富且全面,利于用户对目标容积重建图像上的感兴趣区进行快速定位,该方法可以由容积重建图像生成装置来执行,其中该装置可由软件和/或硬件实现,并一般集成在终端或控制设备中。具体参见图10所示,该方法可以包括如下步骤:
S1010,获取各扫描角度下的投影数据。
可选地,可以通过图像采集设备控制球管在不同的角度下向扫描对象发射X射线,所述不同的角度即为所述扫描角度。所述投影数据可以为图像采集设备的探测器接收到的数据。示例性地,本实施例中的图像采集设备可以为数字乳腺断层摄影(Digital Breast Tomosynthesis,简称DBT)设备,数字乳腺断层摄影设备的球管向扫描对象发射X射线,通过探测器接收透过扫描对象的X射线并将透过扫描对象的X射线转化为数字化的投影数据,将各扫描角度下的投影数据发送给控制设备。可选地,数字乳腺断层摄影设备的球管的扫描角度的范围为15度。
在一种实现方式中,针对各扫描角度,确定各扫描角度的第一视野参数,以及根据各扫描角度的参数调整指令和各扫描角度的第一视野参数确定各扫描角度的第二视野参数。之后,基于各扫描角度的第一视野参数采集各扫描角度的第一投影图像以及基于各扫描角度的第二视野参数采集第二投影图像,并根据各扫描角度的第一投影图像和第二投影图像,确定各扫描角度下的投影数据。
具体的,根据各扫描角度的第一投影图像和第二投影图像确定投影数据,可以是对某一扫描角度的第一投影图像和第二投影图像进行加权操作,得到该扫描角度下的投影数据。
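上述加权操作可示意如下(逐像素线性加权,权重w为假设值):

```python
import numpy as np

def combine_projections(p1, p2, w=0.5):
    """对同一扫描角度的第一投影图像 p1 与第二投影图像 p2 做逐像素加权(示意),
    得到该扫描角度下的投影数据;权重 w 为假设值。"""
    return w * p1 + (1.0 - w) * p2

proj = combine_projections(np.full((2, 2), 2.0), np.full((2, 2), 4.0), w=0.25)
```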
S1020,基于初始容积坐标系和期望重建方向,构建目标容积坐标系。
其中,所述期望重建方向可以是初始容积坐标系中任意方向的向量,可以随着扫描角度的不同而变化。本实施例利用容积重建技术对投影数据进行容积重建,构建出投影数据的三维结构模型,在构建出的三维结构模型中,获取外部输入的任意方向的向量并将该向量作为期望重建方向,以基于期望重建方向和初始容积坐标系对投影数据进行容积重建,便于将肉眼不可见的投影信息以断层图像的形式直观的展示。
其中,所述容积重建技术可以理解为体绘制技术,具体方法是为三维结构模型中的每一个体素指定一个阻光度,并考虑每一个体素对光线的透射、发射和反射作用。光线的透射取决于体素的不透明度;光线的发射取决于体素的物质度,物质度越大,其发射光越强;光线的反射取决于体素所在的面与入射光的夹角关系。体绘制的步骤原则上分为投射、消隐、渲染和合成四个步骤。体绘制算法按处理数据域的不同可分为空间域方法和变换域方法,所述空间域方法是直接对原始的空间数据进行处理显示,所述变换域方法是将体数据变换到变换域,然后再进行处理显示。
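上述“每个体素对光线的作用”可用沿单条光线的front-to-back合成给出一个最小示意(以下数值与函数名均为本示例的假设):

```python
def composite_ray(colors, opacities):
    """沿一条光线按 front-to-back 顺序合成体素(示意)。
    colors: 各体素的发射/反射亮度; opacities: 各体素的阻光度(0~1)。"""
    acc_color, acc_alpha = 0.0, 0.0
    for c, a in zip(colors, opacities):
        acc_color += (1.0 - acc_alpha) * a * c   # 以剩余透射率加权累积亮度
        acc_alpha += (1.0 - acc_alpha) * a       # 累积阻光度
    return acc_color, acc_alpha

color, alpha = composite_ray([1.0, 1.0], [0.5, 0.5])
```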
其中,所述初始容积坐标系的确定方法包括:对所述投影数据进行容积重建,得到初始容积重建图像,基于所述初始容积重建图像构建所述初始容积坐标系。可选地,初始容积坐标系的第一坐标原点为初始容积重建图像所对应重建容积的第一纵轴所属边长的任一位置点;第一横轴在所述第一纵轴所属的平面内,以所述第一坐标原点为起点,且垂直于所述第一纵轴;第一竖轴以所述第一坐标原点为起点,垂直于所述第一横轴和所述第一纵轴所在平面。可选地,所述第一坐标原点可以为重建容积(volume)的第一纵轴所属边长的顶点、中点或者三分之一点等,本实施例将第一纵轴所属边长的中点作为第一坐标原点。
如图11所示为初始容积坐标系的定义的示意图,图11中的重建容积(volume)可以为上述三维结构模型,控制设备获取到投影数据后,利用容积重建技术对所述投影数据进行容积重建,得到初始容积重建图像和所述重建容积(volume),并基于初始容积重建图像构建初始容积坐标系。具体地,图11中初始容积坐标系的第一坐标原点O为重建容积(volume)的第一纵轴Y所属边长(VolumeY)的中点,所述第一横轴X在所述第一纵轴Y所属边长的底面上,与边长(VolumeX)平行,第一竖轴Z为以所述中点O为起点,垂直于所述第一横轴X和所述第一纵轴Y所在底面,且与边长(VolumeZ)平行,基于确定的第一坐标原点O、第一横轴X、第一纵轴Y以及第一竖轴Z构建所述初始容积坐标系。
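按图11的约定构建初始容积坐标系,可示意如下(坐标单位取体素,函数名为本示例的假设):

```python
import numpy as np

def initial_volume_coords(volume_shape):
    """按图11的约定构建初始容积坐标系(示意):
    第一坐标原点 O 取第一纵轴 Y 所属边长(VolumeY)的中点,
    三个坐标轴与重建容积的边平行。volume_shape = (VolumeX, VolumeY, VolumeZ)。"""
    _, vy, _ = volume_shape
    origin = np.array([0.0, vy / 2.0, 0.0])  # Y 边长的中点
    x_axis = np.array([1.0, 0.0, 0.0])       # 第一横轴 X
    y_axis = np.array([0.0, 1.0, 0.0])       # 第一纵轴 Y
    z_axis = np.array([0.0, 0.0, 1.0])       # 第一竖轴 Z
    return origin, x_axis, y_axis, z_axis

origin, x_axis, y_axis, z_axis = initial_volume_coords((100, 100, 60))
```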
进一步地,通过上述步骤构建初始容积坐标系,并结合所述期望重建方向构建目标容积坐标系。可选地,目标容积坐标系的构建方法,包括:确定所述初始容积坐标系所属的重建容积;在所述重建容积内,基于所述期望重建方向,以垂直于所述期望重建方向确定第一当前平面;基于所述第一纵轴和所述第一竖轴构建第二当前平面,将所述第二当前平面与所述第一当前平面的交线中点作为目标容积坐标系的第二坐标原点,并将所述期望重建方向作为所述目标容积坐标系的第二竖轴;在所述第一当前平面内,将所述第二坐标原点所属边长作为目标容积坐标系的第二纵轴,并以所述第二坐标原点为起点,将垂直于所述第二纵轴和所述第二竖轴所在平面的向量作为第二横轴。
其中,所述第一当前平面为上述三维结构模型的任意一个断层图像所在的平面。可选地,所述初始容积坐标系所属的重建容积的确定方法为:在第一纵轴Y正向坐标轴和负向坐标轴分别截取D/2,在第一横轴X和第一竖轴Z的正向坐标轴分别截取D,根据上述截取距离构建正方体,将构建的正方体作为所述重建容积。
示例性地,结合图11和图12解释目标容积坐标系的确定过程。获取期望重建方向,以垂直于期望重建方向确定第一当前平面,基于第一纵轴Y和第一竖轴Z构建第二当前平面,将第二当前平面与第一当前平面的交线中点作为目标容积坐标系的第二坐标原点,将期望重建方向作为图12中的第二竖轴Z′;进一步地,在第一当前平面内,将第二坐标原点所属边长作为目标容积坐标系的第二纵轴Y′,将以第二坐标原点为起点的且垂直于第二纵轴Y′和第二竖轴Z′所在平面的向量作为第二横轴X′。
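由期望重建方向构造目标容积坐标系三个正交轴的过程,可用叉积给出一个草图(参考向量的选取方式为本示例的假设,仅保证三轴两两正交,并非本申请限定的构造方法):

```python
import numpy as np

def target_axes(direction):
    """由期望重建方向构建目标容积坐标系的三个正交轴(示意):
    第二竖轴 Z′ 取期望重建方向,第二纵轴 Y′、第二横轴 X′
    在与其垂直的第一当前平面内构造。"""
    z_new = np.asarray(direction, dtype=float)
    z_new = z_new / np.linalg.norm(z_new)
    # 选一个与 Z′ 不平行的参考向量,用叉积在垂直平面内构造 Y′
    ref = np.array([1.0, 0.0, 0.0]) if abs(z_new[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    y_new = np.cross(z_new, ref)
    y_new = y_new / np.linalg.norm(y_new)
    x_new = np.cross(y_new, z_new)
    return x_new, y_new, z_new

x_new, y_new, z_new = target_axes((0.0, 0.0, 1.0))
```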
S1030,在目标容积坐标系下,根据期望重建方向对投影数据进行重建,生成目标容积重建图像。
可选地,可以采用迭代重建算法、滤波反投影重建算法、反投影滤波重建算法等方式对所述投影数据进行重建,生成目标容积重建图像。所述目标容积重建图像可以垂直于所述期望重建方向,还可以与期望重建方向呈其他角度。
可以理解的是,控制设备获取到不同扫描角度下的投影数据后,可以确定各扫描角度对应的各期望重建方向,并根据各期望重建方向对投影数据进行重建,得到垂直于期望重建方向或者与期望重建方向呈其他角度的目标容积重建图像。相比于现有技术中因初始容积坐标系的竖轴方向不变,仅能在平行于探测器方向上得到分辨率较好的容积重建图像,不能获得其他方向上的信息,本实施例可以通过改变期望重建方向,根据不同的期望重建方向进行重建,得到多个方向上分辨率较好的容积重建图像,有利于用户对多个方向上的容积重建图像进行分析,并进行目标定位。
本实施例提供的技术方案,通过获取各扫描角度下的投影数据,基于初始容积坐标系和期望重建方向,构建目标容积坐标系,在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。解决了现有技术中只能在平行于探测器的方向上获得一张较好的图像,而不能获得其他方向上的信息的问题。通过设置不同的期望重建方向,达到在不同的期望重建方向进行重建,得到各个方向上分辨率较好的容积重建图像的目的,有利于用户对多个重建方向上的容积重建图像进行有效分析,并进行目标定位。
图13为本申请实施例提供的一种容积重建图像生成方法的流程示意图。本实施例的技术方案在上述实施例的基础上进行了细化。可选地,所述根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像,包括:基于所述初始容积坐标系和所述目标容积坐标系的角度差,以及所述初始容积重建图像在所述初始容积坐标系的像素范围,确定所述目标容积坐标系下的像素范围;在所述目标容积坐标系下的像素范围内,根据所述期望重建方向对所述投影数据进行重建,生成所述目标容积重建图像。在该方法实施例中未详尽描述的部分请参考上述实施例。具体参见图13所示,该方法可以包括如下步骤:
S1310,获取各扫描角度下的投影数据。
S1320,基于初始容积坐标系和期望重建方向,构建目标容积坐标系。
S1330,基于初始容积坐标系和目标容积坐标系的角度差,以及初始容积重建图像在初始容积坐标系的像素范围,确定目标容积坐标系下的像素范围。
如前述实施例所述的方法确定了初始容积坐标系和目标容积坐标系,控制设备分别确定第一坐标原点O和第二坐标原点O'的坐标信息、第一横轴X与第二横轴X′之间的角度差、第一纵轴Y与第二纵轴Y′之间的角度差以及第一竖轴Z与第二竖轴Z′之间的角度差,即确定初始容积坐标系和目标容积坐标系的角度差。另外,控制设备根据初始容积重建图像的像素范围,确定初始容积重建图像在初始容积坐标系的各个方向上的像素范围,并进一步结合初始容积坐标系和目标容积坐标系的角度差,确定初始容积重建图像在目标容积坐标系下的像素范围。
结合图11和图12所示的示意图具体地解释,图11和图12所示的重建容积的各个边长的尺寸分别为VolumeX,VolumeY和VolumeZ,初始容积坐标系的第一坐标原点O为重建容积(volume)的第一纵轴Y所属边长的中点,初始容积重建图像在初始容积坐标系的第一横轴X方向上的像素范围为PixelSizeX,初始容积重建图像在初始容积坐标系的第一纵轴Y方向上的像素范围为PixelSizeY,初始容积重建图像在初始容积坐标系的第一竖轴Z方向上的像素范围为PixelSizeZ。基于此,所述第一横轴定义为:ReconCoordinateX,[0:VolumeX]*PixelSizeX,所述第一纵轴定义为:ReconCoordinateY,[-VolumeY/2:VolumeY/2]*PixelSizeY,所述第一竖轴定义为:ReconCoordinateZ,[0:VolumeZ]*PixelSizeZ;其中,所述ReconCoordinateX表示所述初始容积坐标系的X轴,所述ReconCoordinateY表示所述初始容积坐标系的Y轴,所述ReconCoordinateZ表示所述初始容积坐标系的Z轴。
相应的,所述第二坐标原点O'为第二当前平面与第一当前平面的交线中点,所述第二横轴X′定义为:ReconCoordinateXNew,[0:VolumeX/cosβ]*PixelSizeX,所述第二纵轴Y′定义为:ReconCoordinateYNew,[-VolumeY/(2cosα):VolumeY/(2cosα)]*PixelSizeY。
所述期望重建方向Z′(即第二竖轴)定义为:
ReconCoordinateZNew(z)=ReconCoordinateYNew(y)*sinα+ReconCoordinateXNew(x)*sinβ+ReconCoordinateZ(z)/cosθ。
其中,α表示第二纵轴Y′与第一纵轴Y之间的夹角,β表示第二横轴X′与第一横轴X之间的夹角,θ表示第二竖轴Z′与第一竖轴Z之间的夹角;所述ReconCoordinateXNew表示所述目标容积坐标系的X轴(即第二横轴X′),所述ReconCoordinateYNew表示所述目标容积坐标系的Y轴(即第二纵轴Y′),所述ReconCoordinateZNew(z)表示所述目标容积坐标系的Z轴(即第二竖轴Z′);[0:VolumeX/cosβ]*PixelSizeX表示目标容积坐标系的X轴(即第二横轴X′)上的像素范围,[-VolumeY/(2cosα):VolumeY/(2cosα)]*PixelSizeY表示目标容积坐标系的Y轴(即第二纵轴Y′)上的像素范围,目标容积坐标系的Z轴(即第二竖轴Z′)上的像素范围为0。
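正文给出的X轴像素范围[0:VolumeX/cosβ]*PixelSizeX可按如下方式计算(示意,夹角取弧度,函数名为本示例的假设):

```python
import math

def target_x_pixel_range(volume_x, pixel_size_x, beta):
    """按正文定义计算目标容积坐标系 X 轴(第二横轴 X′)上的像素范围(示意):
    [0:VolumeX/cosβ]*PixelSizeX,beta 为 X′ 与 X 之间的夹角(弧度)。"""
    return (volume_x / math.cos(beta)) * pixel_size_x

r0 = target_x_pixel_range(100, 0.1, 0.0)          # 夹角为 0 时退化为初始范围
r1 = target_x_pixel_range(100, 0.1, math.pi / 3)  # 夹角 60° 时范围扩大为 2 倍
```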
S1340,在目标容积坐标系下的像素范围内,根据期望重建方向对投影数据进行重建,生成目标容积重建图像。
通过前述步骤确定了目标容积坐标系包括的像素范围,可以在该像素范围内采用迭代重建算法、滤波反投影重建算法、反投影滤波重建算法等方式对投影数据进行重建,生成目标容积重建图像。所述目标容积重建图像可以垂直于所述期望重建方向,还可以与期望重建方向呈其他角度。
可选地,在对所述投影数据进行重建之前,还包括:对所述投影数据进行预处理。其中,所述预处理包括图像分割、灰度值变换以及窗宽窗位变换中的至少一种。所述图像分割可以采用基于阈值的图像分割、区域生长等方式,用于对投影数据进行筛选,以减少容积重建的计算量;所述灰度值变换可以采用图像反转、对数变换以及伽马变换等方式,用于提高投影数据的对比度,有利于提高容积重建图像的对比度,便于用户分析重建后的图像;窗宽窗位变换可以采用增大窗宽、减小窗宽或者变换窗位中心点等方式,可以去除投影数据的噪声数据,有利于提高投影数据的重建效率。
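以窗宽窗位变换为例,预处理可示意如下(把窗内灰度线性映射到[0,1],窗外截断;窗位、窗宽取值均为假设):

```python
import numpy as np

def window_transform(data, center, width):
    """窗宽窗位变换(示意):把 [center-width/2, center+width/2]
    线性映射到 [0, 1],窗外的值截断,从而抑制窗外噪声数据。"""
    low = center - width / 2.0
    out = (np.asarray(data, dtype=float) - low) / float(width)
    return np.clip(out, 0.0, 1.0)

win = window_transform([-20.0, 0.0, 50.0, 100.0, 140.0], center=50, width=100)
```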
可选地,确定了目标容积重建图像之后,控制设备还可以获取目标容积重建图像的图像渲染指令;基于所述图像渲染指令,对所述目标容积重建图像的像素点进行渲染,并将渲染后的所述目标容积重建图像进行显示。例如,将用户确定的感兴趣区内的像素点的像素值提高,以突出显示感兴趣的图像,便于用户分析图像。
本实施例提供的技术方案,通过基于所述初始容积坐标系和所述目标容积坐标系的角度差,以及所述初始容积重建图像在所述初始容积坐标系的像素范围,确定所述目标容积坐标系下的像素范围,在所述目标容积坐标系下的像素范围内,根据所述期望重建方向对所述投影数据进行重建,生成所述目标容积重建图像。可以准确确定目标容积坐标系下的像素范围,进一步利于生成各个方向上分辨率较好的容积重建图像。
实施例五
图14是根据本申请一些实施例所示的动态透视系统的示例性模块图。如图14所示,该动态透视系统1400可以包括拍摄模块1410和显示模块1420。在一些实施例中,该动态透视系统1400可以由图1中所示的动态透视系统100(如,处理设备140)实现。
拍摄模块1410可以用于在一个拍摄周期内对主体进行拍摄,在所述拍摄过程中获取射线源在第一能量下照射所述主体的第一透视数据,以及获取所述射线源在不同于所述第一能量的第二能量下照射所述主体的第二透视数据。在一些实施例中,拍摄模块1410还可以用于在多个连续的拍摄周期对所述主体进行所述拍摄。
显示模块1420可以用于根据所述多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据,显示所述主体的动态图像。
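基于第一、第二透视数据的减影处理,可用对数域加权相减给出一个最小示意(权重w为假设值,具体减影方式以本申请实施例为准):

```python
import numpy as np

def dual_energy_subtract(high, low, w=0.5):
    """双能减影(示意):对同一拍摄周期内的高、低能透视数据做对数域加权相减;
    权重 w 为假设值,取不同值可分别抑制骨骼或软组织成分。"""
    eps = 1e-6  # 避免对零取对数
    return (np.log(np.asarray(high, dtype=float) + eps)
            - w * np.log(np.asarray(low, dtype=float) + eps))

frame = dual_energy_subtract([np.e ** 2.0], [np.e ** 2.0], w=1.0)
```

对多个连续拍摄周期逐周期执行上述减影,即可得到用于动态显示的图像序列。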
在本申请的另一些实施例中,提供了一种动态透视装置,包括至少一个处理设备140以及至少一个存储设备150;所述至少一个存储设备150用于存储计算机指令,所述至少一个处理设备140用于执行所述计算机指令中的至少部分指令以实现如上所述的动态透视方法200。
在本申请的又一些实施例中,提供了一种计算机可读存储介质,所述存储介质存储计算机指令,当所述计算机指令被计算机(如处理设备140)读取时,处理设备140执行如上所述的动态透视方法200。
需要注意的是,以上对于动态透视系统及其装置/模块的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个装置/模块进行任意组合,或者构成子系统与其他装置/模块连接。例如,图14中披露的拍摄模块1410和显示模块1420可以是一个装置(例如,处理设备140)中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,拍摄模块1410和显示模块1420可以是两个模块,也可以是一个模块同时具有拍摄和显示动态图像的功能。又例如,各个模块可以分别具有各自的存储模块。再例如,各个模块可以共用一个存储模块。还例如,拍摄模块1410可以包括第一拍摄子模块和第二拍摄子模块,其中,第一拍摄子模块可以用于在所述拍摄过程中获取射线源在第一能量下照射所述主体的第一透视数据,第二拍摄子模块可以用于获取所述射线源在第二能量下照射所述主体的第二透视数据。诸如此类的变形,均在本申请的保护范围之内。
本申请实施例可能带来的有益效果包括但不限于:(1)帮助医护人员快速了解主体的病灶和/或各组织器官在多个拍摄周期内发生的动态变化;(2)能够利用双能减影技术获取所需要的图像类别。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
实施例六
图15是本申请实施例四提供的一种医疗成像设备的成像定位装置的示意图。本实施例可适用于对目标部位进行定位的情况,该装置可采用软件和/或硬件的方式实现,该成像定位装置包括:虚拟人体模型获取模块1510、人体内部图像显示模块1520、第一目标成像位置确定模块1530和第二目标成像位置确定模块1540。
其中,虚拟人体模型获取模块1510,用于获取与成像对象对应的虚拟人体模型,并获取与用户操作指令对应的第一位置信息;
人体内部图像显示模块1520,用于基于所述第一位置信息,确定所述虚拟人体模型中与所述第一位置信息对应的人体内部图像,将所述人体内部图像进行显示;
第一目标成像位置确定模块1530,用于如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置对应,则根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置;
第二目标成像位置确定模块1540,用于如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置不对应,则继续获取不同于所述第一位置信息的第二位置信息,并根据所述第二位置信息对应的人体内部图像确定与医疗成像设备对应的目标成像位置。
本实施例的技术方案,通过基于虚拟人体模型确定与用户输入的第一位置信息对应的人体内部图像,并通过与定位指令对应的人体内部图像得到目标信息,解决了成像操作过程中辐射对人体的损伤问题,使得用户在成像定位的过程中可任意设置参数,且不会对人体造成任何损伤,进而也保证了定位的准确度。
在上述技术方案的基础上,可选的,第一位置信息包括虚拟人体模型对应的第一模型信息或医疗成像设备对应的第一设备信息,其中,第一模型信息与第一设备信息之间存在关联关系。
在上述技术方案的基础上,可选的,关联关系包括位置关联关系,该装置还包括:
位置关联关系确定模块,用于将所述医疗成像设备与成像对象的相对位置关系转换为所述医疗成像设备与虚拟人体模型的位置关联关系;其中,所述位置关联关系用于表征第一模型信息与第一设备信息之间的位置参数的关系。
在上述技术方案的基础上,可选的,关联关系还包括视野关联关系,该装置还包括:
视野关联关系获取模块,用于获取第一模型信息与第一设备信息之间的视野关联关系;其中,视野关联关系用于表征第一模型信息与第一设备信息之间的视野参数的关系。
在上述技术方案的基础上,可选的,虚拟人体模型获取模块1510包括:
当所述第一位置信息为第一模型信息时,将所述虚拟人体模型显示在交互界面上,并在所述虚拟人体模型上显示与用户操作指令对应的第一模型信息。
在上述技术方案的基础上,可选的,第一模型信息包括图形化标记。
在上述技术方案的基础上,可选的,所述图形化标记基于用户操作指令执行选择、移动、缩小和放大中至少一项操作。
在上述技术方案的基础上,可选的,该装置还包括:
当所述第一位置信息为第一模型信息时,根据所述第一模型信息和所述关联关系,确定医疗成像设备对应的第一成像位置,并控制所述医疗成像设备移动到第一成像位置。
在上述技术方案的基础上,可选的,第一目标成像位置确定模块1530包括:
第一目标设备信息确定单元,用于当所述第一位置信息为第一模型信息时,根据所述第一模型信息和所述关联关系,确定所述医疗成像设备对应的目标成像位置,并可以控制所述医疗成像设备移动到目标成像位置。
在上述技术方案的基础上,可选的,该装置还包括:
成像操作执行模块,用于控制所述医疗成像设备中的成像组件基于视野参数执行成像操作;其中,所述视野参数包括源像距、源物距和放大率中至少一种。
在上述技术方案的基础上,可选的,人体内部图像显示模块1520具体用于:
当所述第一位置信息为第一设备信息时,根据所述第一设备信息和所述关联关系,确定虚拟人体模型对应的第一模型信息,并基于所述第一模型信息确定人体内部图像。
在上述技术方案的基础上,可选的,第一目标成像位置确定模块1530包括:
第二目标设备信息确定单元,用于当所述第一位置信息为第一设备信息时,将所述第一设备信息中的成像位置作为与所述医疗成像设备对应的目标成像位置。
在上述技术方案的基础上,可选的,虚拟人体模型获取模块1510具体用于:
根据获取到的与成像对象对应的身高数据,选取与身高数据对应的虚拟人体模型;其中,虚拟人体模型包括人体体型模型和人体内部模型。
在上述技术方案的基础上,可选的,人体内部模型包括血管模型、器官模型、骨骼模型和肌肉模型中至少一种。
在上述技术方案的基础上,可选的,医疗成像设备包括数字X射线摄影设备、C形臂X射线设备、乳腺机、计算机断层射线摄影设备、磁共振设备、正电子发射断层成像PET设备、正电子发射断层成像及计算机断层成像PET-CT设备、正电子发射断层成像及磁共振成像PET-MR设备或放射治疗成像RT设备。
本申请实施例所提供的成像定位装置可以用于执行本申请实施例所提供的成像定位方法,具备执行方法相应的功能和有益效果。
值得注意的是,上述成像定位装置的实施例中,所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。
图16是本申请实施例五提供的一种医疗成像设备的结构示意图,本申请实施例为本申请上述实施例的成像定位方法的实现提供服务,可配置上述实施例中的成像定位装置。图16示出了适于用来实现本申请实施方式的示例性电子设备16的框图。图16显示的电子设备16仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图16所示,电子设备16以通用计算设备的形式表现。电子设备16的组件可以包括但不限于:一个或者多个处理器或者处理单元161,系统存储器162,连接不同系统组件(包括系统存储器162和处理单元161)的总线163。
总线163表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器,外围总线,图形加速端口,处理器或者使用多种总线结构中的任意总线结构的局域总线。举例来说,这些体系结构包括但不限于工业标准体系结构(ISA)总线,微通道体系结构(MAC)总线,增强型ISA总线、视频电子标准协会(VESA)局域总线以及外围组件互连(PCI)总线。
电子设备16典型地包括多种计算机系统可读介质。这些介质可以是任何能够被电子设备16访问的可用介质,包括易失性和非易失性介质,可移动的和不可移动的介质。
系统存储器162可以包括易失性存储器形式的计算机系统可读介质,例如随机存取存储器(RAM)1621和/或高速缓存存储器1622。电子设备16可以进一步包括其它可移动/不可移动的、易失性/非易失性计算机系统存储介质。仅作为举例,存储系统1623可以用于读写不可移动的、非易失性磁介质(图16未显示,通常称为“硬盘驱动器”)。尽管图16中未示出,可以提供用于对可移动非易失性磁盘(例如“软盘”)读写的磁盘驱动器,以及对可移动非易失性光盘(例如CD-ROM,DVD-ROM或者其它光介质)读写的光盘驱动器。在这些情况下,每个驱动器可以通过一个或者多个数据介质接口与总线163相连。系统存储器162可以包括 至少一个程序产品,该程序产品具有一组(例如至少一个)程序模块,这些程序模块被配置以执行本申请各实施例的功能。
具有一组(至少一个)程序模块1624的程序/实用工具1625,可以存储在例如存储器162中,这样的程序模块1624包括但不限于操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。程序模块1624通常执行本申请所描述的实施例中的功能和/或方法。
电子设备16也可以与一个或多个外部设备164(例如键盘、指向设备、显示器1641等)通信,还可与一个或者多个使得用户能与该电子设备16交互的设备通信,和/或与使得该电子设备16能与一个或多个其它计算设备进行通信的任何设备(例如网卡,调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口165进行。并且,电子设备16还可以通过网络适配器166与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图16所示,网络适配器166通过总线163与电子设备16的其它模块通信。应当明白,尽管图中未示出,可以结合电子设备16使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
处理单元161通过运行存储在系统存储器162中的程序,从而执行各种功能应用以及数据处理,例如实现本申请实施例所提供的成像定位方法。
在一个实施例中,电子设备16可以是终端设备,配置有成像定位装置,该终端设备与照射系统通讯连接。在另一实施例中,电子设备16也可以是照射系统,配置有成像定位装置。
通过上述设备,解决了辐射对人体的损伤问题,在保证实现成像定位的同时,使得用户在成像定位的过程中可任意设置参数,且不会对人体造成任何损伤,进而也可提高定位的准确度。
本申请实施例五还提供了一种包含计算机可执行指令的存储介质,计算机可执行指令在由计算机处理器执行时用于执行一种成像定位方法,该方法包括:
获取与成像对象对应的虚拟人体模型,并获取与用户操作指令对应的第一位置信息;
基于所述第一位置信息,确定所述虚拟人体模型中与所述第一位置信息对应的人体内部图像,将所述人体内部图像进行显示;
如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置对应,则根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置;
如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置不对应,则继续获取不同于所述第一位置信息的第二位置信息,并根据所述第二位置信息对应的人体内部图像确定与医疗成像设备对应的目标成像位置。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。 计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,程序设计语言包括面向对象的程序设计语言,诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络包括局域网(LAN)或广域网(WAN),连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
当然,本申请实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上的方法操作,还可以执行本申请任意实施例所提供的成像定位方法中的相关操作。
实施例七
图17是本申请实施例三提供的一种图像显示装置的示意图。本实施例可适用于使用血管造影设备进行成像,并将成像结果进行显示的情况,该装置可采用软件和/或硬件的方式实现,该装置可以配置于血管造影设备中。该图像显示装置包括:第一血管图像显示模块1710、第二视野参数确定模块1720和第二血管图像显示模块1730。
其中,第一血管图像显示模块1710,用于获取血管造影设备基于第一视野参数采集到的成像对象的第一血管图像,并显示第一血管图像;
第二视野参数确定模块1720,用于基于接收到的参数调整指令和第一视野参数确定第二 视野参数;其中,参数调整指令中包含缩放调整比例;
第二血管图像显示模块1730,用于基于第二视野参数确定第二血管图像,并将第二血管图像与第一血管图像进行同步显示。
本实施例的技术方案,通过在显示第一血管图像的同时,基于接收到的参数调整指令确定第二血管图像,并将第二血管图像与第一血管图像同步显示,解决了对不同视野的扫描图像进行反复切换显示的问题,降低了重复成像的次数和辐射量,提高了血管造影设备的使用寿命,进而也提高了血管造影的诊断效率,降低了反复切换带来的血管造影的诊断结果的误差。
在上述技术方案的基础上,可选的,第二视野参数确定模块1720包括:
第二放大率确定单元,用于基于缩放调整比例和第一视野参数中的第一放大率,确定第二视野参数中的第二放大率。
在上述技术方案的基础上,可选的,第二视野参数确定模块1720包括:
第二视野范围确定单元,用于基于缩放调整比例和第一视野参数中的第一视野范围,确定第二视野参数中的第二视野范围。
在上述技术方案的基础上,可选的,成像对象包括血管和插入到血管中的导管,第二视野参数确定模块1720包括:
第二视野中心位置确定单元,用于当缩放调整比例大于一时,获取第一血管图像中与导管对应的导管顶端图像,将导管顶端图像对应的视野中心位置和视野范围分别作为第二视野参数中的第二视野中心位置和第二视野范围。
在上述技术方案的基础上,可选的,第二血管图像显示模块1730包括:
第二血管图像确定单元,用于获取血管造影设备基于第二视野参数采集到的第二血管图像;或,当缩放调整比例大于一时,基于第二视野参数中的第二视野中心位置和第二视野范围确定第一血管图像中的参考血管图像,并基于第二视野参数中的第二放大率和参考血管图像确定第二血管图像。
在上述技术方案的基础上,可选的,第二血管图像显示模块1730包括:
第二血管图像第一显示单元,用于在第一血管图像对应的显示设备上,将第二血管图像在不同于第一血管图像的显示区域进行显示;或,将第二血管图像在不同于第一血管图像的显示设备上进行显示。
在上述技术方案的基础上,可选的,第二血管图像显示模块1730包括:
更新第二血管图像显示单元,用于当第一血管图像对应的第一视野参数发生变化时,获取血管造影设备基于变化后的第一视野参数采集到的更新第一血管图像;基于变化后的第一视野参数和参数调整指令确定更新第二视野参数,并将基于更新第二视野参数确定的更新第二血管图像与更新第一血管图像进行同步显示。
本申请实施例所提供的图像显示装置可以用于执行本申请实施例所提供的图像显示方法,具备执行方法相应的功能和有益效果。
值得注意的是,上述图像显示装置的实施例中,所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。
实施例八
图18是本申请实施例提供的一种血管造影设备的结构示意图,本申请实施例为本申请上述实施例任一所述的图像显示方法的实现提供服务,可配置本申请实施例中的图像显示装置。
该血管造影设备包括成像组件180、至少一个显示设备181和控制器182,其中,成像组件180,用于基于第一视野参数采集第一血管图像;显示设备181,用于显示第一血管图像和第二血管图像;控制器182包括一个或多个处理器以及存储有一个或多个程序的存储器,当一个或多个程序被一个或多个处理器执行时,使得一个或多个处理器实现如上述实施例任一所述的图像显示方法。
在一个实施例中,可选的,成像组件包括X射线源和探测器。其中,X射线源,用于发射X射线;探测器,用于根据接收到的X射线生成第一血管图像。
在一个实施例中,可选的,X射线源包括X射线高压发生器和X射线球管。其中,X射线高压发生器的类型有工频高压发生器和高频逆变高压发生器,目前应用较普遍的是高频逆变高压发生器,高频逆变高压发生器又分为连续式高频逆变高压发生器和计算机控制的脉冲式高频逆变高压发生器,其中,连续式高频逆变高压发生器在脉冲采集时的脉冲波需要依靠X射线球管中的栅极开关控制产生。其中,X射线球管可接收高频高压输出从而激发产生X射线。X射线球管的规格参数包括结构参数和电参数两种。前者指X射线球管结构所决定的各种参数,如靶面的倾斜角度、有效焦点、外形尺寸、重量、管壁的滤过当量、阳极转速、工作温度和冷却形式等。电参数是指X射线球管电性能的规格数据,如灯丝加热电压和电流、最大管电压、管电流、最长曝光时间、最大允许功率和阳极热容量等。X射线球管的特点是小焦点、高热容量、高负荷、高转速、散热率高。示例性的,可以采用液态金属轴承技术制作X射线球管,可以避免普通球管在高转速情况下轴承的磨损,不仅散热效率增加,更提高X射线球管承受连续负荷的能力,提高了球管的寿命,同时降低了设备的本征噪声,提高图像的信噪比。
在一个实施例中,可选的,探测器包括平板探测器。
其中,具体的,当血管造影设备包括至少两个显示设备181时,可以根据用户的使用习惯,选择各显示设备之间的摆放关系和各显示设备的放置位置。当在至少两个显示设备181上显示不同放大率的血管图像时,可以预先设置不同显示设备181对应的血管图像的放大率等级,示例性的,显示设备A、显示设备B和显示设备C分别对应的放大率等级依次降低,即在显示设备A上显示放大率最大的血管图像,在显示设备C上显示放大率最小的血管图像。这样设置的好处在于,如果不设置血管图像与显示设备之间的对应关系,在显示设备很多的情况下,用户在想要查看目标血管图像时需要逐一查看每个显示设备上的血管图像以找到所需的目标血管图像,从而增加了用户的查找目标血管图像的负担。而设置血管图像与显示设备之间的对应关系,用户可遵循该对应关系进行目标血管图像的查找,从而大大降低用户查找目标血管图像的负担,进而提高血管造影的诊断效率。
在一个实施例中,可选的,血管造影设备还包括导管,用于将造影剂注射到靶血管。
其中,控制器中的存储器作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请实施例中的图像显示方法对应的程序指令/模块(例如,第一血管图像显示模块1710、第二视野参数确定模块1720和第二血管图像显示模块1730)。处理器通过运行存储在存储器中的软件程序、指令以及模块,从而执行血管造影设备的各种功能应用以及数据处理,即实现上述的图像显示方法。
存储器可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端的使用所创建的数据等。此外,存储器可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器可进一步包括相对于处理器远程设置的存储器,这些远程存储器可以通过网络连接至血管造影设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
在一个实施例中,可选的,血管造影设备还包括输入装置,用于接收输入的数字或字符信息,以及产生与血管造影设备的用户设置以及功能控制有关的键信号输入。
通过上述血管造影设备,解决了对不同视野的扫描图像进行反复切换显示的问题,降低了重复成像的次数和辐射量,提高了血管造影设备的使用寿命,进而也提高了血管造影的诊断效率,降低了反复切换带来的血管造影的诊断结果的误差。
本申请实施例还提供了一种包含计算机可执行指令的存储介质,计算机可执行指令在由计算机处理器执行时用于执行一种图像显示方法,该方法包括:
获取血管造影设备基于第一视野参数采集到的成像对象的第一血管图像,并显示第一血管图像;
基于接收到的参数调整指令以及第一视野参数确定第二视野参数;其中,参数调整指令中包含缩放调整比例;
基于第二视野参数确定第二血管图像,并将第二血管图像与第一血管图像进行同步显示。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以 上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,程序设计语言包括面向对象的程序设计语言,诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络包括局域网(LAN)或广域网(WAN),连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
当然,本申请实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上的方法操作,还可以执行本申请任意实施例所提供的图像显示方法中的相关操作。
实施例九
图19为本申请实施例提供的一种容积重建图像生成装置的结构示意图。参见图19所示,该容积重建图像生成装置包括:投影数据获取模块1910、目标容积坐标系生成模块1920以及目标容积重建图像生成模块1930。
其中,投影数据获取模块1910,用于获取各扫描角度下的投影数据;目标容积坐标系生成模块1920,用于基于初始容积坐标系和期望重建方向,构建目标容积坐标系;目标容积重建图像生成模块1930,用于在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。
在上述各技术方案的基础上,该装置还包括:初始容积坐标系确定模块;其中,初始容积坐标系确定模块,用于对所述投影数据进行容积重建,得到初始容积重建图像;基于所述 初始容积重建图像确定所述初始容积坐标系。
在上述各技术方案的基础上,所述初始容积坐标系的第一坐标原点为初始容积重建图像所对应重建容积的第一纵轴所属边长的任一位置点;第一横轴在所述第一纵轴所属的平面内,以所述第一坐标原点为起点,且垂直于所述第一纵轴;第一竖轴以所述第一坐标原点为起点,垂直于所述第一横轴和所述第一纵轴所在平面。
在上述各技术方案的基础上,目标容积坐标系生成模块1920还用于,确定所述初始容积坐标系所属的重建容积;
在所述重建容积内,基于所述期望重建方向,以垂直于所述期望重建方向确定第一当前平面;
基于所述第一纵轴和所述第一竖轴构建第二当前平面,将所述第二当前平面与所述第一当前平面的交线中点作为目标容积坐标的第二坐标原点,并将所述期望重建方向作为所述目标容积坐标系的第二竖轴;
在所述第一当前平面内,将所述第二坐标原点所属边长作为目标容积坐标系的第二纵轴,并以所述第二坐标原点为起点,将垂直于所述第二纵轴和所述第二竖轴所在平面的向量作为第二横轴。
在上述各技术方案的基础上,目标容积重建图像生成模块1930还用于,基于所述初始容积坐标系和所述目标容积坐标系的角度差,以及所述初始容积重建图像在所述初始容积坐标系的像素范围,确定所述目标容积坐标系下的像素范围;
在所述目标容积坐标系下的像素范围内,根据所述期望重建方向对所述投影数据进行重建,生成所述目标容积重建图像。
在上述各技术方案的基础上,该装置还包括:预处理模块;其中,所述预处理模块,用于对所述投影数据进行预处理。
在上述各技术方案的基础上,该装置还包括:渲染模块;其中,所述渲染模块,用于获取目标容积重建图像的图像渲染指令;
基于所述图像渲染指令,对所述目标容积重建图像的像素点进行渲染,并将渲染后的所述目标容积重建图像进行显示。
本实施例提供的技术方案,通过获取各扫描角度下的投影数据,基于初始容积坐标系和期望重建方向,构建目标容积坐标系,在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。解决了现有技术中只能在平行于探测器的方向上获得一张较好的图像,而不能获得其他方向上的信息的问题。通过设置不同的期望重建方向,达到在不同的期望重建方向进行重建,得到各个方向上分辨率较好的容积重建图像的目的,有利于用户对多个重建方向上的容积重建图像进行有效分析,并进行目标定位。
图20为本申请实施例提供的一种容积重建图像生成系统的结构示意图。参见图20所示,该容积重建图像生成系统包括:控制设备1和图像采集设备2。其中,所述图像采集设备2,用于对扫描对象在所述各扫描角度下进行扫描,以得到各所述扫描角度下的投影数据。图21示出了适用于实现本申请实施方式的示例性图像采集设备2的框图。图21显示的图像采集设备2仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图21所示,图像采集设备2以通用计算设备的形式表现。图像采集设备2的组件可以包括但不限于:一个或者多个处理器或者处理单元16,系统存储器28,连接不同系统组件(包括系统存储器28和处理单元16)的总线18。
总线18表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器,外围总线,图形加速端口,处理器或者使用多种总线结构中的任意总线结构的局域总线。举例来说,这些体系结构包括但不限于工业标准体系结构(ISA)总线,微通道体系结构(MAC)总线,增强型ISA总线、视频电子标准协会(VESA)局域总线以及外围组件互连(PCI)总线。
图像采集设备2典型地包括多种计算机系统可读介质。这些介质可以是任何能够被图像采集设备2访问的可用介质,包括易失性和非易失性介质,可移动的和不可移动的介质。
系统存储器28可以包括易失性存储器形式的计算机系统可读介质,例如随机存取存储器(RAM)30和/或高速缓存。图像采集设备2可以进一步包括其它可移动/不可移动的、易失性/非易失性计算机系统存储介质。仅作为举例,存储系统34可以用于读写不可移动的、非易失性磁介质(图21未显示,通常称为“硬盘驱动器”)。尽管图21中未示出,可以提供用于对可移动非易失性磁盘(例如“软盘”)读写的磁盘驱动器,以及对可移动非易失性光盘(例如CD-ROM,DVD-ROM或者其它光介质)读写的光盘驱动器。在这些情况下,每个驱动器可以通过一个或者多个数据介质接口与总线18相连。存储器28可以包括至少一个程序产品,该程序产品具有一组(例如容积重建图像生成装置的投影数据获取模块1910、目标容积坐标系生成模块1920以及目标容积重建图像生成模块1930)程序模块,这些程序模块被配置以执行本申请各实施例的功能。
具有一组(例如容积重建图像生成装置的投影数据获取模块1910、目标容积坐标系生成模块1920以及目标容积重建图像生成模块1930)程序模块46的程序/实用工具44,可以存储在例如存储器28中,这样的程序模块46包括但不限于操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。程序模块46通常执行本申请所描述的实施例中的功能和/或方法。
图像采集设备2也可以与一个或多个外部设备14(例如键盘、指向设备、显示器24等)通信,还可与一个或者多个使得用户能与该图像采集设备2交互的设备通信,和/或与使得该图像采集设备2能与一个或多个其它计算设备进行通信的任何设备(例如网卡,调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口22进行。并且,图像采集设备2还可 以通过网络适配器20与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图所示,网络适配器20通过总线18与图像采集设备2的其它模块通信。应当明白,尽管图中未示出,可以结合图像采集设备2使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
处理单元16通过运行存储在系统存储器28中的程序,从而执行各种功能应用以及数据处理,例如实现本申请实施例所提供的一种容积重建图像生成方法,包括:
获取各扫描角度下的投影数据;
    基于初始容积坐标系和期望重建方向,构建目标容积坐标系;
在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。
当然,本领域技术人员可以理解,处理器还可以实现本申请任意实施例所提供的一种容积重建图像生成方法的技术方案。
本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如本申请实施例所提供的一种容积重建图像生成方法,包括:
获取各扫描角度下的投影数据;
基于初始容积坐标系和期望重建方向,构建目标容积坐标系;
在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。当然,本申请实施例所提供的一种计算机可读存储介质,其上存储的计算机程序不限于如上的方法操作,还可以执行本申请任意实施例所提供的一种容积重建图像生成方法中的相关操作。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码,例如投影数据、初始容积坐标系、期望重建方向以及目标容积坐标系等。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,程序设计语言包括面向对象的程序设计语言,诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络包括局域网(LAN)或广域网(WAN),连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
值得注意的是,上述容积重建图像生成装置的实施例中,所包括的各个模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。

Claims (50)

  1. 一种C形臂设备的动态透视方法,其特征在于,包括:
    在一个拍摄周期内对主体进行拍摄,在所述拍摄过程中获取射线源在第一能量下照射所述主体的第一透视数据,以及获取所述射线源在不同于所述第一能量的第二能量下照射所述主体的第二透视数据;
    在多个连续的拍摄周期对所述主体进行所述拍摄;
    根据所述多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据,显示所述主体的动态图像。
  2. 根据权利要求1所述的方法,其特征在于,所述第一能量的可用能量范围与所述第二能量的可用能量范围是部分交叠的或者间隔开的。
  3. 根据权利要求1所述的方法,其特征在于,所述第一能量和所述第二能量的能量差为20KeV~100KeV。
  4. 根据权利要求1所述的方法,其特征在于,所述主体的动态图像包括软组织图像、骨骼图像和/或至少包括软组织和骨骼的综合图像。
  5. 根据权利要求4所述的方法,其特征在于,所述综合图像由所述软组织图像和所述骨骼图像根据特定权重整合而成。
  6. 根据权利要求1所述的方法,其特征在于,所述根据所述多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据,显示所述主体的动态图像包括:
    对所述多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据进行减影处理,获得每个拍摄周期的图像;
    基于所述多个连续拍摄周期的多个图像,确定并显示所述主体的动态图像。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取用户输入的指令,所述指令反映所述用户所选择的动态图像类别;
    所述显示所述主体的动态图像包括:基于所述指令显示所述主体的动态图像。
  8. 一种医疗成像设备的成像定位方法,其特征在于,包括:
    获取与成像对象对应的虚拟人体模型,并获取与用户操作指令对应的第一位置信息;
    基于所述第一位置信息,确定所述虚拟人体模型中与所述第一位置信息对应的人体内部图像,将所述人体内部图像进行显示;
    如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置对应,则根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置;
    如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置不对应,则继续获取不同于所述第一位置信息的第二位置信息,并根据所述第二位置信息对应的人体内部图像确定与医疗成像设备对应的目标成像位置。
  9. 根据权利要求8所述的方法,其特征在于,所述第一位置信息包括所述虚拟人体模型对应的第一模型信息或所述医疗成像设备对应的第一设备信息,其中,所述第一模型信息与所述第一设备信息之间存在关联关系。
  10. 根据权利要求9所述的方法,其特征在于,所述关联关系包括位置关联关系,所述方法还包括:
    将所述医疗成像设备与成像对象的相对位置关系转换为所述医疗成像设备与虚拟人体模型的位置关联关系;其中,所述位置关联关系用于表征第一模型信息与第一设备信息之间的位置参数的关系。
  11. 根据权利要求10所述的方法,其特征在于,所述关联关系还包括视野关联关系,所述方法还包括:
    获取第一模型信息与第一设备信息之间的视野关联关系;其中,所述视野关联关系用于表征第一模型信息与第一设备信息之间的视野参数的关系。
  12. 根据权利要求9所述的方法,其特征在于,所述获取与用户操作指令对应的第一位置信息,包括:
    当所述第一位置信息为第一模型信息时,将所述虚拟人体模型显示在交互界面上,并在所述虚拟人体模型上显示与用户操作指令对应的第一模型信息。
  13. 根据权利要求12所述的方法,其特征在于,所述第一模型信息包括图形化标记。
  14. 根据权利要求13所述的方法,其特征在于,所述图形化标记基于用户操作指令执行选择、移动、缩小和放大中至少一项操作。
  15. 根据权利要求9所述的方法,其特征在于,在继续获取不同于所述第一位置信息的第二位置信息之前,所述方法还包括:
    当所述第一位置信息为第一模型信息时,根据所述第一模型信息和所述关联关系,确定医疗成像设备对应的第一成像位置,并控制所述医疗成像设备移动到所述第一成像位置。
  16. 根据权利要求9所述的方法,其特征在于,所述根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置,包括:
    当所述第一位置信息为第一模型信息时,根据所述第一模型信息和所述关联关系,确定所述医疗成像设备对应的目标成像位置,并控制所述医疗成像设备移动到目标成像位置。
  17. 根据权利要求16所述的方法,其特征在于,在控制所述医疗成像设备移动到目标成像位置之后,还包括:
    控制所述医疗成像设备中的成像组件基于视野参数执行成像操作;其中,所述视野参数包括源像距、源物距和放大率中至少一种。
  18. 根据权利要求9所述的方法,其特征在于,所述基于所述第一位置信息,确定所述虚拟人体模型中与所述第一位置信息对应的人体内部图像,包括:
    当所述第一位置信息为第一设备信息时,根据所述第一设备信息和所述关联关系,确定虚拟人体模型对应的第一模型信息,并基于所述第一模型信息确定人体内部图像。
  19. 根据权利要求9所述的方法,其特征在于,所述根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置,包括:
    当所述第一位置信息为第一设备信息时,将所述第一设备信息中的成像位置作为与所述医疗成像设备对应的目标成像位置。
  20. 根据权利要求8所述的方法,其特征在于,所述获取与成像对象对应的虚拟人体模型,包括:
    根据获取到的与所述成像对象对应的身高数据,选取与所述身高数据对应的虚拟人体模型;其中,所述虚拟人体模型包括人体体型模型和人体内部模型。
  21. 根据权利要求20所述的方法,其特征在于,所述人体内部模型包括血管模型、器官模型、骨骼模型和肌肉模型中至少一种。
  22. 根据权利要求8所述的方法,其特征在于,所述医疗成像设备包括数字X射线摄影设备、C形臂X射线设备、乳腺机、计算机断层射线摄影设备、磁共振设备、正电子发射断层成像PET设备、正电子发射断层成像及计算机断层成像PET-CT设备、正电子发射断层成像及磁共振成像PET-MR设备或放射治疗成像RT设备。
  23. 一种图像显示方法,应用于血管造影设备中,其特征在于,所述图像显示方法包括:
    获取血管造影设备基于第一视野参数采集到的成像对象的第一血管图像,并显示所述第一血管图像;
    基于接收到的参数调整指令以及所述第一视野参数确定第二视野参数;其中,所述参数调整指令中包含缩放调整比例;
    基于所述第二视野参数确定第二血管图像,并将所述第二血管图像与所述第一血管图像进行同步显示。
  24. 根据权利要求23所述的方法,其特征在于,所述基于接收到的参数调整指令以及所述第一视野参数确定第二视野参数,包括:
    基于所述缩放调整比例和所述第一视野参数中的第一放大率,确定第二视野参数中的第 二放大率。
  25. 根据权利要求23所述的方法,其特征在于,所述基于接收到的参数调整指令以及所述第一视野参数确定第二视野参数,包括:
    基于所述缩放调整比例和所述第一视野参数中的第一视野范围,确定第二视野参数中的第二视野范围。
  26. 根据权利要求23所述的方法,其特征在于,所述成像对象包括血管和插入到血管中的导管,所述基于接收到的参数调整指令以及所述第一视野参数确定第二视野参数,包括:
    当所述缩放调整比例大于一时,获取所述第一血管图像中与所述导管对应的导管顶端图像,将所述导管顶端图像对应的视野中心位置和视野范围分别作为第二视野参数中的第二视野中心位置和第二视野范围。
  27. 根据权利要求23所述的方法,其特征在于,所述基于所述第二视野参数确定第二血管图像,包括:
    获取血管造影设备基于所述第二视野参数采集到的第二血管图像;或,
    当所述缩放调整比例大于一时,基于第二视野参数中的第二视野中心位置和第二视野范围确定所述第一血管图像中的参考血管图像,并基于第二视野参数中的第二放大率和所述参考血管图像确定第二血管图像。
  28. 根据权利要求23所述的方法,其特征在于,所述将所述第二血管图像与所述第一血管图像进行同步显示,包括:
    在所述第一血管图像对应的显示设备上,将所述第二血管图像在不同于所述第一血管图像的显示区域进行显示;或,
    将所述第二血管图像在不同于所述第一血管图像的显示设备上进行显示。
  29. 根据权利要求23所述的方法,其特征在于,所述将所述第二血管图像与所述第一血管图像进行同步显示,包括:
    当所述第一血管图像对应的第一视野参数发生变化时,获取血管造影设备基于变化后的第一视野参数采集到的更新第一血管图像;
    基于变化后的第一视野参数和所述参数调整指令确定更新第二视野参数,并将基于所述更新第二视野参数确定的更新第二血管图像与所述更新第一血管图像进行同步显示。
  30. 一种容积重建图像生成方法,其特征在于,包括:
    获取各扫描角度下的投影数据;
    基于初始容积坐标系和期望重建方向,构建目标容积坐标系;
    在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。
  31. 根据权利要求30所述的方法,其特征在于,所述方法还包括:
    对所述投影数据进行容积重建,得到初始容积重建图像;
    基于所述初始容积重建图像确定所述初始容积坐标系。
  32. 根据权利要求31所述的方法,其特征在于,所述初始容积坐标系的第一坐标原点为初始容积重建图像所对应重建容积的第一纵轴所属边长的任一位置点;第一横轴在所述第一纵轴所属的平面内,以所述第一坐标原点为起点,且垂直于所述第一纵轴;第一竖轴以所述第一坐标原点为起点,垂直于所述第一横轴和所述第一纵轴所在平面。
  33. 根据权利要求30所述的方法,其特征在于,所述基于初始容积坐标系和期望重建方向,构建目标容积坐标系,包括:
    确定所述初始容积坐标系所属的重建容积;
    在所述重建容积内,基于所述期望重建方向,以垂直于所述期望重建方向确定第一当前平面;
    基于所述第一纵轴和所述第一竖轴构建第二当前平面,将所述第二当前平面与所述第一当前平面的交线中点作为目标容积坐标系的第二坐标原点,并将所述期望重建方向作为所述目标容积坐标系的第二竖轴;
    在所述第一当前平面内,将所述第二坐标原点所属边长作为目标容积坐标系的第二纵轴,并以所述第二坐标原点为起点,将垂直于所述第二纵轴和所述第二竖轴所在平面的向量作为第二横轴。
  34. 根据权利要求31所述的方法,其特征在于,所述在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像,包括:
    基于所述初始容积坐标系和所述目标容积坐标系的角度差,以及所述初始容积重建图像在所述初始容积坐标系的像素范围,确定所述目标容积坐标系下的像素范围;
    在所述目标容积坐标系下的像素范围内,根据所述期望重建方向对所述投影数据进行重建,生成所述目标容积重建图像。
  35. 根据权利要求30所述的方法,其特征在于,在所述根据所述期望重建方向对所述投影数据进行重建之前,还包括:
    对所述投影数据进行预处理。
  36. 根据权利要求30所述的方法,其特征在于,还包括:
    获取目标容积重建图像的图像渲染指令;
    基于所述图像渲染指令,对所述目标容积重建图像的像素点进行渲染,并将渲染后的所述目标容积重建图像进行显示。
  37. 一种容积重建图像生成装置,其特征在于,包括:
    投影数据获取模块,用于获取各扫描角度下的投影数据;
    目标容积坐标系生成模块,用于基于初始容积坐标系和期望重建方向,构建目标容积坐标系;
    目标容积重建图像生成模块,用于在所述目标容积坐标系下,根据所述期望重建方向对所述投影数据进行重建,生成目标容积重建图像。
  38. 一种C形臂设备的动态透视系统,其特征在于,包括拍摄模块和显示模块;
    所述拍摄模块用于在一个拍摄周期内对主体进行拍摄,在所述拍摄过程中获取射线源在第一能量下照射所述主体的第一透视数据,以及获取所述射线源在不同于所述第一能量的第二能量下照射所述主体的第二透视数据;
    所述拍摄模块还用于在多个连续的拍摄周期对所述主体进行所述拍摄;
    所述显示模块用于根据所述多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据,显示所述主体的动态图像。
  39. 一种C形臂设备的动态透视装置,其特征在于,所述装置包括至少一个处理器和至少一个存储设备,所述存储设备用于存储指令,当所述至少一个处理器执行所述指令时,实现如权利要求1~7中任一项所述的方法。
  40. 一种C形臂成像系统,其特征在于,包括射线源、探测器、存储器以及显示器,所述射线源包括球管、高压发生器以及高压控制模块;其中:
    所述高压控制模块,控制所述高压发生器在第一能量和第二能量之间往复切换;
    所述射线源,在所述高压发生器的驱动下,在一个拍摄周期内对主体发出射线,所述探测器获取所述主体在第一能量射线照射下的第一透视数据以及在第二能量射线照射下的第二透视数据;
    所述存储器,存储多个连续拍摄周期的每个拍摄周期中获取的所述第一透视数据和所述第二透视数据;
    所述存储器还存储有图像处理单元,所述图像处理单元对每个拍摄周期内的第一透视数据和第二透视数据进行减影处理,以获得所述主体在每个拍摄周期内的图像,进而得到在多个拍摄周期的动态图像;
    显示器,用于显示所述主体在多个拍摄周期的动态图像。
  41. 根据权利要求40所述的系统,其特征在于,所述C形臂成像系统包括DSA设备或移动C形臂设备。
  42. 一种医疗成像设备的成像定位装置,其特征在于,包括:
    虚拟人体模型获取模块,用于获取与成像对象对应的虚拟人体模型,并获取与用户操作指令对应的第一位置信息;
    人体内部图像显示模块,用于基于所述第一位置信息,确定所述虚拟人体模型中与所述第一位置信息对应的人体内部图像,将所述人体内部图像进行显示;
    第一目标成像位置确定模块,用于如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置对应,则根据所述第一位置信息确定所述医疗成像设备对应的目标成像位置;
    第二目标成像位置确定模块,用于如果所述第一位置信息对应的所述人体内部图像与目标拍摄位置不对应,则继续获取不同于所述第一位置信息的第二位置信息,并根据所述第二位置信息对应的人体内部图像确定与医疗成像设备对应的目标成像位置。
  43. 一种图像显示装置,其特征在于,包括:
    第一血管图像显示模块,用于获取血管造影设备基于第一视野参数采集到的成像对象的第一血管图像,并显示所述第一血管图像;
    第二视野参数确定模块,用于基于接收到的参数调整指令和所述第一视野参数确定第二视野参数;其中,所述参数调整指令中包含缩放调整比例;
    第二血管图像显示模块,用于基于所述第二视野参数确定第二血管图像,并将所述第二血管图像与所述第一血管图像进行同步显示。
  44. 一种血管造影设备,其特征在于,所述血管造影设备包括成像组件、至少一个显示设备和控制器;
    其中,所述成像组件,用于基于第一视野参数采集第一血管图像;
    所述显示设备,用于显示第一血管图像和第二血管图像;
    所述控制器包括一个或多个处理器以及存储有一个或多个程序的存储器,当所述一个或多个程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如权利要求23-29任一所述的图像显示方法。
  45. 一种容积重建图像生成系统,其特征在于,包括:控制设备和图像采集设备;
    其中,所述控制设备包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求30-37中任一项所述的容积重建图像生成方法;
    所述图像采集设备,用于对扫描对象在所述各扫描角度下进行扫描,以得到各所述扫描角度下的投影数据。
  46. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取所述存储介质中的所述计算机指令后,所述计算机执行如权利要求1-7中任一项所述的方法。
  47. 一种医疗成像设备,其特征在于,所述医疗成像设备包括:
    一个或多个处理器;
    存储器,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求8-22中任一所述的医疗成像设备的成像定位方法。
  48. 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求8-22中任一所述的医疗成像设备的成像定位方法。
  49. 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求23-29中任一所述的图像显示方法。
  50. 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时实现如权利要求30-37中任一项所述的容积重建图像生成方法。
PCT/CN2021/118006 2020-09-11 2021-09-13 C形臂设备的动态透视方法、装置及系统 WO2022053049A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21866101.5A EP4201331A4 (en) 2020-09-11 2021-09-13 METHOD, APPARATUS AND DYNAMIC PERSPECTIVE SYSTEM FOR C-SHAPED ARM EQUIPMENT
US18/182,286 US20230230243A1 (en) 2020-09-11 2023-03-10 Methods, devices, and systems for dynamic fluoroscopy of c-shaped arm devices

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
CN202010957835.1A CN111973209A (zh) 2020-09-11 2020-09-11 一种c形臂设备的动态透视方法和系统
CN202010957835.1 2020-09-11
CN202011019324.1 2020-09-24
CN202011017481.9A CN112057094A (zh) 2020-09-24 2020-09-24 一种图像显示方法、装置、血管造影设备及存储介质
CN202011019279.X 2020-09-24
CN202011019279.XA CN112150600B (zh) 2020-09-24 2020-09-24 一种容积重建图像生成方法、装置、系统及存储介质
CN202011017481.9 2020-09-24
CN202011019324.1A CN112150543A (zh) 2020-09-24 2020-09-24 医疗成像设备的成像定位方法、装置、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/182,286 Continuation US20230230243A1 (en) 2020-09-11 2023-03-10 Methods, devices, and systems for dynamic fluoroscopy of c-shaped arm devices

Publications (1)

Publication Number Publication Date
WO2022053049A1 true WO2022053049A1 (zh) 2022-03-17

Family

ID=80632624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118006 WO2022053049A1 (zh) 2020-09-11 2021-09-13 C形臂设备的动态透视方法、装置及系统

Country Status (3)

Country Link
US (1) US20230230243A1 (zh)
EP (1) EP4201331A4 (zh)
WO (1) WO2022053049A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022147491A (ja) * 2021-03-23 2022-10-06 コニカミノルタ株式会社 動態解析装置及びプログラム

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040215071A1 (en) * 2003-04-25 2004-10-28 Frank Kevin J. Method and apparatus for performing 2D to 3D registration
CN101126725A (zh) * 2007-09-24 2008-02-20 舒嘉 采用x射线容积摄影实现图像重建的方法
CN102068268A (zh) * 2010-12-17 2011-05-25 陈建锋 一种利用多能量x射线复合投影数字合成成像的方法及其装置
CN107146266A (zh) * 2017-05-05 2017-09-08 上海联影医疗科技有限公司 一种图像重建方法、装置、医学成像系统及存储媒介
CN111261265A (zh) * 2020-01-14 2020-06-09 于金明 基于虚拟智能医疗平台的医学影像系统
US20200218922A1 (en) * 2018-12-17 2020-07-09 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for determining a region of interest of a subject
CN111973209A (zh) * 2020-09-11 2020-11-24 上海联影医疗科技股份有限公司 一种c形臂设备的动态透视方法和系统
CN112057094A (zh) * 2020-09-24 2020-12-11 上海联影医疗科技股份有限公司 一种图像显示方法、装置、血管造影设备及存储介质
CN112150543A (zh) * 2020-09-24 2020-12-29 上海联影医疗科技股份有限公司 医疗成像设备的成像定位方法、装置、设备及存储介质
CN112150600A (zh) * 2020-09-24 2020-12-29 上海联影医疗科技股份有限公司 一种容积重建图像生成方法、装置、系统及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3071108A1 (en) * 2013-11-18 2016-09-28 Koninklijke Philips N.V. One or more two dimensional (2d) planning projection images based on three dimensional (3d) pre-scan image data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4201331A4 *

Also Published As

Publication number Publication date
EP4201331A1 (en) 2023-06-28
EP4201331A4 (en) 2024-01-10
US20230230243A1 (en) 2023-07-20

Similar Documents

Publication Publication Date Title
JP5491914B2 (ja) Image display apparatus and X-ray diagnostic apparatus
JP4859446B2 (ja) Angiographic X-ray diagnostic apparatus for rotational angiography
JP6127033B2 (ja) Radiographic imaging system, image processing apparatus, and image processing program
JP6127032B2 (ja) Radiographic imaging system, image processing apparatus, image processing method, and image processing program
US10631810B2 (en) Image processing device, radiation imaging system, image processing method, and image processing program
US9836861B2 (en) Tomography apparatus and method of reconstructing tomography image
US10105118B2 (en) Apparatus and method of processing medical image
CN112150543A (zh) Imaging positioning method, apparatus, device, and storage medium for medical imaging equipment
WO2022053049A1 (zh) Dynamic fluoroscopy method, apparatus, and system for a C-arm device
CN111343922B (zh) Radiation tomographic image processing apparatus and radiation tomography apparatus
JP2016168120A (ja) Medical image processing apparatus and image display control method in a medical image processing apparatus
US20230200759A1 (en) Medical image diagnosis apparatus and scanning-range setting method
US11832984B2 (en) System and method for motion guidance during medical image acquisition
US12108993B2 (en) Methods and system for guided device insertion during medical imaging
JP6956514B2 (ja) X-ray CT apparatus and medical information management apparatus
JP6925786B2 (ja) X-ray CT apparatus
JP6824641B2 (ja) X-ray CT apparatus
JP5455290B2 (ja) Medical image processing apparatus and medical image diagnostic apparatus
CN111973209A (zh) Dynamic fluoroscopy method and system for a C-arm device
JP5591555B2 (ja) X-ray diagnostic apparatus, image processing apparatus, and X-ray diagnostic system
JP6676359B2 (ja) Control apparatus, control system, control method, and program
US12100174B2 (en) Methods and system for dynamically annotating medical images
US20240127450A1 (en) Medical image processing apparatus and non-transitory computer readable medium
JP6855173B2 (ja) X-ray CT apparatus
KR20160072004A (ko) Tomography apparatus and method of reconstructing a tomographic image thereof

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21866101

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021866101

Country of ref document: EP

Effective date: 20230320

NENP Non-entry into the national phase

Ref country code: DE