WO2018201437A1 - Image segmentation method and system - Google Patents

Image segmentation method and system

Info

Publication number
WO2018201437A1
Authority
WO
WIPO (PCT)
Prior art keywords: model, image, edge, point, points
Prior art date
Application number
PCT/CN2017/083184
Other languages
English (en)
French (fr)
Inventor
郭延恩
沈建华
王晓东
Original Assignee
上海联影医疗科技有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 上海联影医疗科技有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority to EP17908558.4A (EP3608872B1)
Priority to PCT/CN2017/083184
Publication of WO2018201437A1
Priority to US16/674,172 (US11170509B2)
Priority to US17/454,053 (US11935246B2)

Classifications

    • All classifications below fall under G PHYSICS; G06 COMPUTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
    • G06T 7/149: Image analysis; segmentation; edge detection involving deformable models, e.g. active contour models
    • G06T 11/003: 2D image generation; reconstruction from projections, e.g. tomography
    • G06T 3/02: Geometric image transformations in the plane of the image; affine transformations
    • G06T 7/12: Image analysis; edge-based segmentation
    • G06T 7/13: Image analysis; edge detection
    • G06T 2207/10081: Image acquisition modality; tomographic images; computed X-ray tomography [CT]
    • G06T 2207/20061: Special algorithmic details; transform domain processing; Hough transform
    • G06T 2207/20076: Special algorithmic details; probabilistic image processing
    • G06T 2207/20121: Image segmentation details; active appearance model [AAM]
    • G06T 2207/20124: Image segmentation details; active shape model [ASM]
    • G06T 2207/30012: Biomedical image processing; bone; spine; backbone
    • G06T 2207/30016: Biomedical image processing; brain
    • G06T 2207/30048: Biomedical image processing; heart; cardiac
    • G06T 2207/30056: Biomedical image processing; liver; hepatic
    • G06T 2207/30084: Biomedical image processing; kidney; renal

Definitions

  • the present application relates to an image processing method, and in particular, to a probability-based image matching segmentation method and system.
  • With the improvement of human living standards and the prolongation of life expectancy, cardiovascular disease has become the number one cause of death in humans. Therefore, early diagnosis of cardiovascular disease can effectively reduce the mortality rate. Understanding the imaging findings and functional data of cardiac structures is an important prerequisite for the correct diagnosis of heart disease.
  • the development of CT technology has significantly improved temporal resolution, reduced heartbeat artifacts, and shown good potential for displaying the fine structure of the heart.
  • Image segmentation technology is a key technology in image analysis, which plays an increasingly important role in imaging medicine.
  • Image segmentation is an indispensable means to extract quantitative information about specific tissues in medical images, and it is also a pre-processing step and premise for visualization. Segmented images are used in a variety of applications, such as quantitative analysis of tissue volume, diagnosis, localization of diseased tissue, study of anatomical structures, treatment planning, partial volume effect correction of functional imaging data, and computer-guided surgery.
  • the deformable model is a common approach in the field of cardiac chamber segmentation.
  • the cardiac chamber model is an average derived from image data corresponding to multiple sets of clinical heart chamber models.
  • a matching image is obtained by matching the model with the image.
  • a method includes: acquiring image data; reconstructing an image based on the image data, wherein the image includes one or more first edges, the first edge having a plurality of points; acquiring a model, wherein the model includes one or more second edges corresponding to the one or more first edges; matching the model to the reconstructed image; and adjusting the one or more second edges of the model based on the one or more first edges.
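For orientation, the minimal Python sketch below mirrors the claimed flow (acquire, represent, match, adjust), assuming each edge is an (N, 3) point array. The centroid-based coarse match and the nearest-point adjustment are illustrative stand-ins only; the patent's own steps (a weighted generalized Hough transform and classifier-driven target points) are described further below.

```python
import numpy as np

def match_model_to_image(model_pts, image_pts):
    # Coarse match: align the model's centroid with the image edge centroid.
    # (The patent performs this step with a weighted generalized Hough transform.)
    return model_pts + (image_pts.mean(axis=0) - model_pts.mean(axis=0))

def adjust_second_edges(model_pts, image_pts):
    # Fine adjustment: move every model (second-edge) point halfway toward its
    # nearest image (first-edge) point.
    d2 = ((model_pts[:, None, :] - image_pts[None, :, :]) ** 2).sum(axis=-1)
    nearest = image_pts[d2.argmin(axis=1)]
    return 0.5 * model_pts + 0.5 * nearest

# Toy data standing in for a reconstructed image edge and a model edge.
rng = np.random.default_rng(0)
first_edge = rng.random((200, 3))          # points on the image's first edge
second_edge = rng.random((100, 3)) + 2.0   # points on the model's second edge
matched = match_model_to_image(second_edge, first_edge)
adjusted = adjust_second_edges(matched, first_edge)
```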
  • the image data includes brain images, skull images, chest images, heart images, breast images, abdominal images, kidney images, liver images, pelvic images, perineal images, limb images, spine images, or vertebra images.
  • acquiring the model includes: acquiring a plurality of reference models; registering the acquired plurality of reference models; determining a plurality of control points on the plurality of reference models after registration; obtaining the control points of the model based on the plurality of control points on the plurality of reference models; and generating the model according to the control points of the model.
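A minimal sketch of this averaging step, assuming each reference model has already been reduced to an array of corresponding control points; the `normalize` routine here is a crude stand-in for the registration described in the claim.

```python
import numpy as np

def normalize(pts):
    # Crude registration stand-in: remove translation and overall scale so
    # that all reference models share comparable coordinates.
    centered = pts - pts.mean(axis=0)
    return centered / np.linalg.norm(centered)

def build_average_model(reference_models):
    # reference_models: list of (K, 3) arrays whose rows are corresponding
    # control points picked on each registered reference model.
    registered = np.stack([normalize(m) for m in reference_models])
    return registered.mean(axis=0)   # the model's control points

refs = [np.random.rand(50, 3) * (i + 1) for i in range(8)]
model_control_points = build_average_model(refs)
```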
  • the method further includes generating a correlation factor for the control point of the model based on a relationship of the control point of the model to the one or more second edges in the model.
  • adjusting one or more second edges of the model includes: determining a reference point on the second edge; determining a target point corresponding to the reference point; and adjusting the second edge of the model based on the target point.
  • determining a target point corresponding to the reference point comprises: determining a normal of the reference point; acquiring a step size and a search range; determining one or more candidate points along the normal according to the step size and the search range; acquiring a first classifier; determining, according to the first classifier, a probability that the one or more candidate points correspond to the first edge; and determining the target point based on the probability that the one or more candidate points correspond to the first edge.
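A sketch of the candidate search just described, assuming the trained first classifier is available as a callable `edge_prob` that returns the probability that a 3D point lies on the first edge:

```python
import numpy as np

def find_target_point(ref_pt, normal, step, search_range, edge_prob):
    # Walk along the normal on both sides of the reference point, score each
    # candidate with the classifier's edge probability, and keep the best.
    offsets = np.arange(-search_range, search_range + step, step)
    candidates = ref_pt + offsets[:, None] * normal
    probs = np.array([edge_prob(c) for c in candidates])
    return candidates[probs.argmax()]

# edge_prob below is a dummy classifier whose probability peaks at z = 1.
target = find_target_point(
    ref_pt=np.zeros(3), normal=np.array([0.0, 0.0, 1.0]),
    step=0.5, search_range=3.0,
    edge_prob=lambda p: float(np.exp(-abs(p[2] - 1.0))))
# target is approximately [0, 0, 1]
```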
  • determining a normal of the reference point comprises: determining one or more polygon meshes adjacent to the reference point; determining one or more normals corresponding to the one or more polygon meshes; and determining the normal of the reference point according to the one or more normals.
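A sketch of this normal computation for a triangle mesh, assuming consistently oriented faces; averaging the adjacent face normals is one standard choice, not necessarily the patent's exact rule.

```python
import numpy as np

def vertex_normal(vertices, faces, vidx):
    # Average the unit normals of all triangles adjacent to vertex `vidx`
    # (assumes consistently oriented faces), then re-normalize.
    normals = []
    for a, b, c in (f for f in faces if vidx in f):
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals.append(n / np.linalg.norm(n))
    n = np.mean(normals, axis=0)
    return n / np.linalg.norm(n)

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [-1, 0, 0]], dtype=float)
faces = [(0, 1, 2), (0, 2, 3)]
print(vertex_normal(verts, faces, 0))   # -> [0. 0. 1.]
```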
  • matching the model and the reconstructed image comprises: acquiring a second classifier; performing a weighted generalized Hough transform according to the second classifier; and matching the model and the image according to the result of the weighted generalized Hough transform.
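A translation-only toy version of weighted generalized Hough voting, assuming per-point edge probabilities from the second classifier are given; a full generalized Hough transform would also accumulate votes over rotation and scale.

```python
import numpy as np

def weighted_hough_translation(model_pts, image_pts, image_probs, bin_size=2.0):
    # Each (image point, model point) pair votes for a candidate translation,
    # weighted by that image point's edge probability from the second
    # classifier; the most-voted bin wins.
    votes = {}
    for ip, w in zip(image_pts, image_probs):
        for mp in model_pts:
            key = tuple(np.round((ip - mp) / bin_size).astype(int))
            votes[key] = votes.get(key, 0.0) + w
    best = max(votes, key=votes.get)
    return np.array(best) * bin_size   # translation aligning model to image

model = np.random.rand(20, 3)
image = model + np.array([6.0, -4.0, 2.0])   # image edge = shifted model edge
print(weighted_hough_translation(model, image, np.ones(len(image))))
```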
  • acquiring the first classifier includes: acquiring a point classifier that classifies a plurality of points of the first edge according to image features related to the degree of sharpness and the location; acquiring the plurality of points after classification, wherein at least a part of the points classified by the point classifier are located within a certain range of the first edge; determining the classified points within the certain range of the first edge to be positive samples; determining the classified points outside the certain range of the first edge to be negative samples; classifying the positive samples and the negative samples; and obtaining the trained first classifier according to the classified positive and negative samples.
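A hedged sketch of building the positive/negative samples and training a classifier on them. scikit-learn's GradientBoostingClassifier is used purely as a stand-in; the patent trains a Probabilistic Boosting-Tree (PBT), discussed later, and the feature vectors here are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_edge_classifier(features, dist_to_edge, threshold=2.0):
    # Points whose distance to the first edge is within `threshold` become
    # positive samples; points farther away become negative samples.
    labels = (np.abs(dist_to_edge) <= threshold).astype(int)
    clf = GradientBoostingClassifier()   # stand-in for the patent's PBT
    clf.fit(features, labels)
    return clf

rng = np.random.default_rng(1)
feats = rng.random((500, 8))               # per-point image features
dists = rng.uniform(0.0, 10.0, size=500)   # distance of each point to the edge
clf = train_edge_classifier(feats, dists)
edge_probability = clf.predict_proba(feats[:5])[:, 1]
```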
  • acquiring the second classifier includes: acquiring a plurality of points of the model, wherein at least a portion of the plurality of points are within a certain range of the second edge; determining that the points within the certain range of the second edge are positive samples; determining that the points outside the certain range of the second edge are negative samples; classifying the positive and negative samples according to the degree of sharpness and the position; and obtaining the trained second classifier according to the classified positive and negative samples.
  • a system includes a memory configured to store data and instructions and a processor in communication with the memory.
  • when executing instructions in the memory, the processor is configured to: acquire image data; reconstruct an image based on the image data, wherein the image includes one or more first edges, the first edge having a plurality of points; acquire a model, wherein the model includes one or more second edges corresponding to the one or more first edges; match the model with the reconstructed image; and adjust one or more second edges of the model according to the one or more first edges.
  • a non-transitory computer readable medium carries a computer program product that includes a plurality of instructions.
  • the plurality of instructions are configured to: acquire image data; reconstruct an image based on the image data, wherein the image includes one or more first edges, the first edge having a plurality of points; acquire a model, wherein the model includes one or more second edges corresponding to the one or more first edges; match the model with the reconstructed image; and adjust one or more second edges of the model based on the one or more first edges.
  • FIG. 1 is a schematic diagram of an application scenario of an example control and processing system according to some embodiments of the present application;
  • FIG. 2 is a schematic diagram of an example system configuration of a processing device according to some embodiments of the present application;
  • FIG. 3 is a schematic diagram of an example mobile device for implementing some of the specific systems of the present application, in accordance with some embodiments of the present application;
  • FIG. 4 is a schematic diagram of an example processing device according to some embodiments of the present application;
  • FIG. 5 is an exemplary flowchart of an implementation of the processing device according to some embodiments of the present application;
  • FIG. 6 is a schematic diagram of an example model building module according to some embodiments of the present application;
  • FIG. 7 is an exemplary flowchart of constructing an average model according to some embodiments of the present application;
  • FIG. 8 is a schematic diagram of an example training module according to some embodiments of the present application;
  • FIG. 9 is an exemplary flowchart of training a classifier according to some embodiments of the present application;
  • FIG. 10 is a schematic structural diagram of an example model matching module according to some embodiments of the present application;
  • FIG. 11 is an exemplary flowchart of matching an average model and a reconstructed image according to some embodiments of the present application;
  • FIG. 12 is a schematic structural diagram of an example adjustment module according to some embodiments of the present application;
  • FIG. 13 is an exemplary flowchart of adjusting a model according to some embodiments of the present application;
  • FIG. 14 is an exemplary flowchart for determining a target point according to some embodiments of the present application;
  • FIG. 15 is an exemplary flowchart for determining a normal of an average model edge point according to some embodiments of the present application;
  • FIG. 16 is an exemplary flowchart of an average model edge point transform according to some embodiments of the present application;
  • FIG. 17 is a schematic diagram of the degree of image sharpness according to the present application;
  • FIG. 21 is an embodiment of a correlation-factor-based mesh model according to the present application;
  • FIG. 22 is an exemplary image edge classified based on the degree of sharpness;
  • FIG. 23 is an exemplary model classified based on the degree of sharpness;
  • FIG. 24 is an embodiment of an image probability map obtained by the classifier;
  • FIG. 25 is an exemplary diagram of an average mesh model matched with an image after the Hough transform;
  • FIG. 26 is an exemplary diagram of image chamber segmentation results with an exact match after adjustment;
  • FIG. 27A is an image segmentation diagram obtained without using the correlation factor;
  • FIG. 27B is an image segmentation diagram obtained using the correlation factor.
  • the present disclosure describes a process of image segmentation, which matches the edges of the image based on probability, and transforms the edge points of the model based on a correlation-factor mesh model, to achieve an accurate matching or segmentation effect.
  • the matching of medical images may include seeking a spatial transformation, or a series of spatial transformations, for a medical image such that it is spatially consistent with corresponding points on the model. This consistency can mean that the same anatomical point on the human body has the same spatial location on the matched image and model.
  • the result of the matching can align all anatomical points on the image, and/or all diagnostically significant anatomical points and points of surgical interest.
  • FIG. 1 is a schematic diagram of an application scenario of an example control and processing system according to some embodiments of the present application.
  • control and processing system 100 includes an imaging device 110, a database 120, and a processing device 130.
  • the imaging device 110 can generate an image by scanning one target object.
  • the image can be a variety of medical images.
  • a head image, a chest image, an abdominal image, a pelvic image, a perineal image, a limb image, a spine image, a vertebra image, and the like.
  • the head image may include a brain image, a skull image, and the like.
  • the chest image may include an entire chest image, a heart image, a breast image, and the like.
  • the abdominal image may include an entire abdominal image, a kidney image, a liver image, and the like.
  • Cardiac images may include, but are not limited to, an omnidirectional digitized heart map, a digitized cardiac tomogram, a cardiac phase contrast map, an X-ray (CR) image, a multimodal image, and the like.
  • the medical image may be a two-dimensional image or a three-dimensional image.
  • the format of the medical image may include a JPEG format, a TIFF format, a GIF format, an FPX format, and the like.
  • the medical image may be stored in the database 120 or may be transmitted to the processing device 130 for image processing.
  • the present application will be described by taking a heart image as an example, but those skilled in the art will appreciate that the method of the present application can also be applied to other images.
  • the database 120 can store image and/or image related information.
  • the image and image related information may be provided by the imaging device 110 and the processing device 130, or may be obtained from outside the system 100, for example, as user input information, information obtained from the network, etc.
  • the image related information may include algorithms for processing images, samples, models, parameters, real-time data during processing, and the like.
  • Database 120 can be a hierarchical database, a networked database, or a relational database.
  • Database 120 can be a local database or a remote database.
  • the database 120 or other storage devices within the system may digitize the information and store it using storage devices that operate in an electrical, optical, or magnetic manner.
  • database 120 or other storage devices within the system may be devices that store information using electrical energy, such as random access memory (RAM), read only memory (ROM), and the like.
  • Random access memory may include, but is not limited to, a combination of one or more of a dekatron, a selectron tube, delay line memory, a Williams tube, dynamic random access memory (DRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), and the like.
  • Read-only memory includes, but is not limited to, a combination of one or more of magnetic bubble memory, magnetic button line memory, thin film memory, magnetic plated wire memory, magnetic core memory, magnetic drum memory, optical disc drives, hard disks, magnetic tape, non-volatile memory (NVRAM), phase change memory, magnetoresistive random access memory, ferroelectric random access memory, non-volatile static random access memory, programmable read-only memory, mask read-only memory, floating-gate random access memory, nano random access memory, racetrack memory, resistive random access memory, programmable metallization cells, and the like.
  • database 120 or other storage devices within the system may be devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories, USB flash drives, memory, and the like.
  • database 120 or other storage devices within the system may be devices that optically store information, such as CDs, DVDs, and the like.
  • database 120 may be a device that stores information using magneto-optical means, such as a magneto-optical disk or the like.
  • the access mode of the database 120 or other storage devices in the system may be a combination of one or more of random storage, serial access storage, read-only storage, and the like.
  • Database 120 or other storage devices within the system may be non-persistent or permanent memory. The storage devices mentioned above are just a few examples, and the storage devices that the database 120 can use are not limited thereto.
  • the database 120 may be part of the processing device 130, may be part of the imaging device 110, or may exist independently of the processing device 130 and the imaging device 110. In some embodiments, database 120 can be coupled to other modules or devices in control and processing system 100 via network 150.
  • the connection manner may include a wired connection, a wireless connection, or a combination of both.
  • the processing device 130 may acquire image data from the imaging device 110, and may also acquire image data from the database 120. Processing device 130 can perform a variety of processing on the acquired image. The processing may include grayscale histogram processing, normalization processing, geometric transformation, spatial transformation, image smoothing processing, image enhancement processing, image segmentation processing, image transformation processing, image restoration, image compression, image feature extraction, and the like. Processing device 130 may store the processed image data to database 120 or may be transmitted to devices other than control and processing system 100.
  • processing device 130 may include one or more processors, memory, and the like.
  • processing device 130 may include a combination of one or more of a central processing unit (CPU), an application specific integrated circuit (ASIC), an application specific instruction set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a micro control unit, a processor, a microprocessor, an advanced RISC machine processor, and the like.
  • control and processing system 100 may also include a terminal device 140.
  • the terminal device can perform information interaction with the imaging device 110, the database 120, and the processing device 130.
  • the terminal device 140 may acquire processed image data from the processing device 130.
  • terminal device 140 may acquire image data from imaging device 110 and transmit the image data to processing device 130 for image processing.
  • the terminal device 140 may include one or more input devices, a control panel, and the like.
  • the input device may include a keyboard, a touch screen, a mouse, a voice input device, a scanning device, an information recognition device (such as a human eye recognition system, a fingerprint recognition system, a brain monitoring system, etc.), a remote controller, and the like.
  • Control and processing system 100 can be coupled to network 150.
  • the network 150 can be a wireless network, a mobile network, a wired network, or another connection.
  • the wireless network may include Bluetooth®, WLAN, Wi-Fi, WiMAX, and the like.
  • the mobile network may include a 2G signal, a 3G signal, a 4G signal, and the like.
  • a wired network may include a local area network (LAN), a wide area network (WAN), a proprietary network, and the like.
  • the database 120 and processing device 130 in the control and processing system 100 can execute operational instructions through the cloud computing platform.
  • the cloud computing platform may include a storage-based cloud platform based on data storage, a computing-based cloud platform based on data processing, and an integrated cloud computing platform that combines computing and data storage processing. For example, some of the image data generated by control and processing system 100 may be calculated or stored by a cloud computing platform.
  • the above description of the control and processing system 100 is merely for convenience of description and is not intended to limit the scope of the embodiments.
  • the processing device 130 can include a data bus 210, a processor 220, a read only memory (ROM) 230, a random access memory (RAM) 240, a communication port 250, and an input/output port 260.
  • the connection manner between the hardware in the processing device 130 may be wired, wireless, or a combination of the two. Any piece of hardware can be local, remote, or a combination of both.
  • Data bus 210 can be used to transmit data information.
  • data can be transferred between the pieces of hardware within the processing device 130 via the data bus 210.
  • processor 220 can transmit data over the data bus 210 to other hardware, such as memory or input/output port 260.
  • the data may be real data, or may be instruction code, status information or control information.
  • data bus 210 can be an industry standard (ISA) bus, an extended industry standard (EISA) bus, a video electronics standard (VESA) bus, an external component interconnect standard (PCI) bus, and the like.
  • Processor 220 can be used for logic operations, data processing, and instruction generation.
  • the processor 220 can obtain data/instructions from an internal memory, which can include read only memory (ROM), random access memory (RAM), a cache (not shown in the figure), and so on.
  • processor 220 can include multiple sub-processors that can be used to implement different functions of the system.
  • the read only memory 230 is used for the power-on self-test of the processing device 130, the initialization of each functional module in the processing device 130, the driver of the basic input/output of the processing device 130, and the like.
  • the read only memory can include programmable read only memory (PROM), erasable programmable read only memory (EPROM), and the like.
  • the random access memory 240 is used to store an operating system, various applications, data, and the like.
  • random access memory 240 can include static random access memory (SRAM), dynamic random access memory (DRAM), and the like.
  • the communication port 250 is used to connect the operating system with an external network to implement communication between them.
  • communication port 250 can include an FTP port, an HTTP port, or a DNS port, and the like.
  • the input/output port 260 is used for data and information exchange, and control, between an external device or circuit and the processor 220.
  • input/output port 260 can include a USB port, a PCI port, an IDE port, and the like.
  • the hard disk 270 is used to store information and data generated by the processing device 130 or received from outside the processing device 130.
  • the hard disk 270 may include a mechanical hard disk (HDD), a solid state hard disk (SSD), or a hybrid hard disk (HHD).
  • Display 280 is used to present information and data generated by system 130 to the user.
  • display 280 can include a physical display such as a display with speakers, an LCD display, an LED display, an OLED display, an electronic ink display (E-Ink), and the like.
  • the terminal device 140 can include a mobile device 350.
  • a user may receive or transmit information related to the control and processing system 100 via the mobile device 350.
  • Mobile device 350 can include one or more of a smart phone, a personal digital assistant (PDA), a tablet, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual reality device, or an augmented reality device.
  • mobile device 350 can include one or more central processing units (CPUs) 358, one or more graphics processing units (GPUs) 356, a display 354, a memory 362, a communication platform 352, a storage module 368, and one or more input/output devices 360. Further, the mobile device 350 can also include a system bus, a controller, and the like. As shown in FIG. 3, the CPU may load a mobile device operating system (e.g., iOS, Android, Windows Phone, etc.) 364 and one or more applications 366 from the storage module 368 into the memory 362. The one or more applications 366 can include a web page or other mobile application (App) for receiving and communicating information related to the control and processing system 100. The user may obtain or provide information through the input/output device 360, which may be further transmitted to the control and processing system 100 and/or device units within the system.
  • a computer hardware platform may be utilized as a hardware platform for one or more components (e.g., control and processing system 100 and other portions thereof), implementing various modules, units, and functions thereof.
  • the hardware components, operating systems, and programming languages are conventional in nature, and those skilled in the art are likely to adapt these techniques to heart image model building and edge segmentation.
  • a computer with a user interface can be used as a personal computer (PC), other workstation or terminal device, and a properly programmed computer can also act as a server. Since the structure, programming, and general operation of the computer device used in the present application should be familiar to those skilled in the art, no specific explanation will be made for other drawings.
  • the processing device 130 can include an acquisition module 410, an image reconstruction module 420, a storage module 430, a model building module 440, a training module 450, a matching module 460, and an adjustment module 470.
  • the connection manner between modules in the processing device 130 may be wired, wireless, or a combination of the two. Any module can be local, remote, or a combination of both.
  • the storage module 430 can be used to store image data or information, and its function can be realized by a combination of one or more of the hard disk 270, the read only memory 230, the random access memory 240, and the like in FIG. 2.
  • the storage module 430 can store information of other modules in the processing device 130 or modules or devices outside of the processing device 130.
  • the information stored by the storage module 430 may include scan data of the imaging device 110, control commands or parameter information input by the user, intermediate data generated by the processing portion in the processing device 130, or complete data information, and the like.
  • the storage module 430 can transmit the stored information to the processing portion for image processing.
  • the storage module 430 can store information generated by the processing portion, such as real-time computing data.
  • the storage module 430 may include, but is not limited to, various types of storage devices such as a solid state hard disk, a mechanical hard disk, a USB flash memory, an SD memory card, an optical disk, a random access memory (RAM), or a read only memory (ROM).
  • the storage module 430 may be a storage device inside the system or may be outside the system.
  • the obtaining module 410 can be used to acquire image data collected by the imaging device 110, image data stored in the database 120, or data external to the control and processing system 100; its function can be implemented by the processor 220 in FIG. 2.
  • the image data may include image data acquired by the imaging device 110, algorithms for processing the images, samples, models, parameters, real-time data during processing, and the like.
  • the acquisition module 410 can send the acquired image data or information to the image reconstruction module 420 for processing.
  • the obtaining module 410 may send the acquired algorithm, parameters, and the like of the processed image to the model building module 440.
  • the acquisition module 410 can send the acquired image data or information to the storage module 430 for storage.
  • the obtaining module 410 may send the acquired samples, parameters, models, real-time data, and the like to the training module 450, the matching module 460, or the adjustment module 470.
  • the acquisition module 410 can receive a data acquisition instruction from the processor 220 and complete a corresponding data acquisition operation.
  • the acquisition module 410 can preprocess the image data or information after it has been acquired.
  • the image reconstruction module 420 can be used to construct a medical image, the functionality of which can be implemented by the processor 220 of FIG. 2.
  • image reconstruction module 420 can retrieve image data or information from acquisition module 410 or storage module 430 and construct the medical image from the image data or information.
  • the medical image may be a three-dimensional medical image of a human body.
  • the image data may include scan data at different times, different locations, and different angles. Based on the scan data, the image reconstruction module 420 can calculate the characteristics or state of the corresponding part of the human body, such as the absorption capacity of the corresponding part of the human body, the density of the tissue corresponding to the human body, and the like, thereby constructing the three-dimensional medical image of the human body.
  • the human body three-dimensional medical image may be displayed through the display 280 or stored by the storage module 430. In some embodiments, the human body three-dimensional medical image may also be sent to the model building module 440 for further processing as the reconstructed image to be processed.
  • the model building module 440 can be used to build a three-dimensional average model of the target object.
  • the target object may be a heart
  • the three-dimensional average model may be a heart chamber three-dimensional average mesh model constructed based on multiple sets of reference models.
  • the model building module 440 can obtain a reference model of at least one heart chamber and information related to the reference model by means of the acquisition module 410, the storage module 430, or a user input.
  • the information related to the reference model may include the size of the image, the pixels, the spatial position of the pixels, and the like.
  • the model building module 440 may pre-register the reference model according to the acquired reference model of the at least one heart chamber and the information related to the reference model, such that the directions, proportions, and the like of all the reference models are consistent.
  • the chamber edges of the pre-processed images may further be labeled, manually or automatically by the processor, so as to divide the cardiac reference model into a plurality of sub-chambers, and a heart chamber average mesh model is constructed based on the edge point data of each chamber.
  • the model building module 440 may send the constructed heart chamber average mesh model to the storage module 430 for storage, or may send it to the training module 450 or the matching module 460 for further operations.
  • the model building module 440 can also determine the relationship between the various chambers on the average model based on the plurality of sets of reference model data. For example, model building module 440 can construct a correlation factor matrix that can represent the impact of each chamber on one or more edge data points. By constructing a correlation factor matrix, chamber boundary separation can be improved. The model building module 440 may send the constructed correlation factor matrix to the storage module 430 for storage, or may send it to the matching module 460 or the adjustment module 470 for operation processing.
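An illustrative, hypothetical picture of such a correlation factor matrix; the shapes, weights, and linear blending rule below are assumptions for explanation, not the patent's exact formulation.

```python
import numpy as np

# Hypothetical correlation factor matrix W, shape (num_edge_points, num_chambers):
# W[i, c] weights how strongly chamber c influences edge point i. A point on a
# shared wall carries weight in both chamber columns, so adjusting one chamber
# also moves the shared boundary coherently instead of tearing it.
W = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.5, 0.5],   # point on the wall shared by chambers 0 and 1
              [0.2, 0.8],
              [0.0, 1.0]])

chamber_shift = np.array([[1.0, 0.0, 0.0],   # displacement applied to chamber 0
                          [0.0, 2.0, 0.0]])  # displacement applied to chamber 1
point_shift = W @ chamber_shift              # blended per-point displacement
```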
  • Training module 450 can be used to train the classifier.
  • the training module 450 can divide the possible edge points into different chamber categories.
  • the training module 450 can divide a range of data points near the edge of the reference model into six chamber categories: left ventricle, left atrium, right ventricle, right atrium, left myocardium, and aorta.
  • the training module 450 can divide a range of data points near the edge of the reference model into categories such as the left ventricular edge, the left atrial sharp edge, the left atrial non-sharp edge, the right ventricular sharp edge, the right ventricular non-sharp edge, and so on, based on the degree of change at the chamber edge.
  • the training module 450 can obtain a reference model of at least one heart chamber and information related to the reference model through the storage module 430, the model building module 440, or a user input.
  • the information related to the reference model may include edge point data or the like of each chamber in the reference model.
  • the training module 450 can divide points near the edge of the chamber into positive and negative samples based on the distance from the point near the edge of the chamber to the edge of the chamber.
  • the positive sample can include data points within a certain threshold range from the edge of the chamber
  • the negative samples can include data points that are further from the edge and other random locations in the space.
  • the training module 450 can train points near the edge of the chamber on the reference model or average model based on positive and negative sample points and obtain one or more classifiers.
  • the training module 450 can acquire a point classifier.
  • the point classifier classifies a plurality of points of the first edge according to image features.
  • the image features can be related to the degree of sharpness and location.
  • the training module 450 can acquire the first classifier.
  • the first classifier can be associated with the point classifier.
  • based on the plurality of points classified by the point classifier, the first classifier may take the classified points within a certain range of the first edge as positive samples and the classified points outside that range as negative samples. Then, the positive samples and the negative samples are classified, and the trained first classifier is obtained according to the classified positive and negative samples.
  • the training module 450 can train the classifier using a Probabilistic Boosting-Tree (PBT).
  • the PBT may include a two-level PBT algorithm or a multi-level PBT algorithm.
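A toy two-level PBT in the spirit of the algorithm named above, using scikit-learn's AdaBoostClassifier for the boosted nodes; real PBT implementations differ in how samples are weighted and routed, so treat this as a sketch only.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class TwoLevelPBT:
    # Toy two-level probabilistic boosting tree: a root boosted classifier
    # routes samples to two child boosted classifiers, and the final posterior
    # mixes the children's outputs by the root's posterior.
    def fit(self, X, y):
        self.root = AdaBoostClassifier(n_estimators=50).fit(X, y)
        p = self.root.predict_proba(X)[:, 1]
        self.children = []
        for mask in (p < 0.5, p >= 0.5):
            if mask.any() and np.unique(y[mask]).size == 2:
                self.children.append(
                    AdaBoostClassifier(n_estimators=50).fit(X[mask], y[mask]))
            else:
                self.children.append(self.root)  # branch is pure: reuse the root
        return self

    def predict_proba1(self, X):
        p = self.root.predict_proba(X)[:, 1]
        p0 = self.children[0].predict_proba(X)[:, 1]
        p1 = self.children[1].predict_proba(X)[:, 1]
        return (1 - p) * p0 + p * p1

X = np.random.rand(400, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
probs = TwoLevelPBT().fit(X, y).predict_proba1(X)
```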
  • the training module 450 may send the trained classifier to the storage module 430 for storage, or may send it to the adjustment module 470 for arithmetic processing.
  • the matching module 460 can be configured to match the image to be processed with the average model established by the model building module 440 to construct a three-dimensional mesh model corresponding to the image to be processed.
  • the image to be processed may be acquired from the image reconstruction module 420 or the storage module 430.
  • the matching module 460 can match the average model to the image to be processed by a Hough transform or the like to obtain a heart chamber three-dimensional mesh model roughly matched with the image to be processed.
  • the matching module 460 can obtain information such as parameters required by the Hough transform by means of the obtaining module 410, the storage module 430, or a user input.
  • the matching module 460 may send the matched heart chamber three-dimensional mesh model to the storage module 430 for storage, or may send it to the adjustment module 470 for further optimization processing.
  • the adjustment module 470 can be used to optimize the model to bring the model closer to the real heart (cardiac image data to be processed).
  • the adjustment module 470 can obtain a roughly matched cardiac chamber mesh model from the matching module 460 or the storage module 430.
  • the adjustment module 470 can determine the optimal heart chamber edge based on the probability that a range of data points on the edge of the chamber on the resulting cardiac model belong to the edge of the chamber.
  • the adjustment module 470 can further accurately adjust the heart chamber three-dimensional mesh model.
  • the fine adjustments may include similarity transformations, piecewise affine transformations, and/or energy function based microvariations, and the like.
  • the adjustment module 470 can perform image form conversion on the precisely adjusted cardiac chamber three-dimensional mesh model to obtain a heart chamber edge segmentation map (as shown in FIG. 26).
  • the adjustment module 470 can send the precisely adjusted heart chamber model or heart chamber segmentation map to the storage module 430 for storage, or can be sent to the display 280 for display.
  • the above description of the processing device 130 is merely for convenience of description, and the present application is not limited to the scope of the embodiments. It can be understood that, after understanding the working principle of the device, those skilled in the art may combine the modules arbitrarily, form a subsystem connected with other modules, or make various modifications and changes in the form and details of the above devices without departing from the principle. For example, model building module 440 and/or training module 450 can be eliminated or merged with storage module 430. Variations such as these are within the scope of the present application.
  • FIG. 5 is an exemplary flow diagram of an implementation processing device shown in accordance with some embodiments of the present application.
  • image data can be acquired.
  • step 510 can be implemented by acquisition module 410.
  • the image data may be obtained from imaging device 110, database 120, or external to control and processing system 100.
  • the image data may include raw image data acquired by CT, positron emission tomography (PET), single photon emission tomography (SPECT), MRI (magnetic resonance imaging), ultrasound (Ultrasound), and other medical imaging equipment.
  • the image data may be raw image data of the whole heart or a local part of the heart.
  • step 510 can include pre-processing the acquired cardiac raw image data and transmitting the pre-processed raw image data to image reconstruction module 420 or storage module 430 in processing device 130.
  • the pre-processing may include distortion correction, denoising, smoothing, enhancement, etc. of the image.
  • the heart image can be reconstructed from the heart image data.
  • This step can be accomplished by image reconstruction module 420 in processing device 130 based on image reconstruction techniques.
  • the cardiac image data can be acquired by the acquisition module 410 or the storage module 430.
  • the cardiac image may include an omnidirectional digitized heart map, a digitized cardiac tomogram, a cardiac phase contrast map, an X-ray cardiac image (CR) map, a multimodal cardiac image, and the like.
  • the heart image may be a two-dimensional image or a three-dimensional image.
  • the format of the heart image may include a JPEG format, a TIFF format, a GIF format, an FPX format, and the like.
  • the image reconstruction technique may include a decomposed simultaneous equations method, a Fourier transform reconstruction method, a direct back projection reconstruction method, a filtered back projection reconstruction method, a Fourier back projection reconstruction method or a convolution back projection reconstruction method, an iterative reconstruction method, and the like.
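As a concrete example of one listed technique, the snippet below runs filtered back projection on a toy phantom using scikit-image's radon/iradon (an assumed dependency, not something the patent specifies):

```python
import numpy as np
from skimage.transform import radon, iradon   # assumes scikit-image is installed

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0                          # toy block of "tissue"
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=theta)               # simulated projections
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
```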
  • step 520 can pre-process the acquired cardiac image data and obtain a plurality of heart sections or projections.
  • the acquired cardiac image data or pre-processed cardiac image data can include a plurality of heart cross-sectional views.
  • the image reconstruction module 420 can reconstruct the heart image or model based on information provided by a series of heart sections.
  • the information provided by the heart cross-sectional view may include information such as tissue density at different parts of the heart, ability to absorb radiation, and the like.
  • the reconstructed heart image can be displayed by display 280 or stored by storage module 430.
  • the reconstructed cardiac image may also be further image processed by model building module 440 in processing device 130.
  • a three-dimensional cardiac average mesh model can be constructed. This step can be done by the model building module 440 in the processing device 130 according to a plurality of reference models.
  • Step 530 can acquire a plurality of reference models by means of the acquisition module 410, the storage module 430, or user input.
  • step 530 can include image registration of a plurality of reference models.
  • the image registration may include grayscale image registration, transform domain based image registration, feature based image registration, and the like.
  • the features may include feature points, feature regions, feature edges, and the like.
  • the plurality of reference models may be heart chamber segmentation data or a reference model that has been labeled by the user through the edge of the chamber.
  • the average model may include a calculated model such as a Point Distribution Model (PDM), an Active Shape Model (ASM), an Active Contour Model (also called Snakes), and an Active Appearance Model (AAM).
  • step 530 can include determining a relationship between the various chambers on the constructed average model from the chamber edge data on the plurality of reference models and establishing a two-dimensional correlation factor matrix.
  • a three-dimensional cardiac average mesh model, or an average model containing correlation factor information, may be sent directly by the processor 220 to the storage module 430 in the processing device 130 for storage, or sent to the matching module 460 for further processing.
  • Heart image data can be matched to the three-dimensional cardiac mean mesh model in step 540. Further, the matching can include matching the first edge in the cardiac image data to the second edge of the three-dimensional cardiac average mesh model.
  • the first edge can include an outer edge and an inner edge.
  • the outer edge may be an outer contour of the heart
  • the inner edge may be an inner chamber contour of the heart
  • the outer contour and the inner chamber contour may be filled by heart tissue.
  • the second edge of the three-dimensional cardiac average mesh model may also include an outer edge and an inner edge.
  • the second edge outer edge corresponds to an edge of the outer contour of the heart
  • the second edge inner edge corresponding to an edge of the inner chamber contour of the heart.
  • the outer and inner edges may refer to edges for coarse matching and exact matching, respectively.
  • the outer and inner edges do not necessarily have a geometrically internal or external relationship.
  • the edges for rough matching may be on the outside, inside, or the same side of the edge for precise matching.
  • the edges for coarse matching may overlap or intersect with the edges for exact matching.
  • the matching of the cardiac image data to the three-dimensional cardiac average model may be a match of an outer edge of the first edge of the cardiac image data with an outer edge of the second edge of the three-dimensional cardiac mean mesh model.
  • step 540 can be accomplished by matching module 460 through an image matching method.
  • the image matching method may include an NNDR-based matching method, a proximity feature point search algorithm, a Hough transform-based target detection, and the like.
  • the heart average model established by the model building module 440 can be matched to the first edge of the heart image data processed by the image reconstruction module 420 by a generalized Hough transform, and a matched heart model is obtained.
  • the first edge has a plurality of points, and a plurality of points of the first edge are classified according to image features to acquire a point classifier.
  • the image features can be related to the degree of sharpness and location.
  • the weighted generalized Hough transform can be implemented based on the probability that each point on the heart image data to be matched belongs to an edge.
  • the probability may be calculated according to the first classifier trained by the training module 450, and each point on the heart image data to be matched is input to the classifier.
  • the first classifier can be acquired based on a point classifier.
  • the point classifier may be obtained by classifying a plurality of points of the first edge according to image features.
  • an edge probability map of the heart to be matched can be constructed based on the probabilities obtained for the points on the heart to be matched.
  • the edge probability map may include a grayscale gradient map, a color gradient map (as shown in FIG. 24), and the like.
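A minimal stand-in for such a map: the grayscale-gradient variant can be approximated directly from voxel intensities, as sketched below (the classifier-based probability map would replace the gradient magnitude with classifier outputs):

```python
import numpy as np

def grayscale_gradient_map(volume):
    # Normalized gray-level gradient magnitude: a simple stand-in for the
    # classifier-derived edge probability map described above.
    gx, gy, gz = np.gradient(volume.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    return mag / (mag.max() + 1e-12)

volume = np.random.rand(32, 32, 32)   # stands in for reconstructed heart data
probability_map = grayscale_gradient_map(volume)
```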
  • the heart image may be pre-processed prior to calculating the probability of each point on the heart image data to be matched as an edge.
  • the matching module 460 may send the matched heart model or the three-dimensional heart mesh model to the storage module 430 for storage, or may send it to the adjustment module 470 for further optimization processing.
  • a precisely adjusted heart chamber segmentation map can be obtained. This step can be accomplished by the adjustment module 470 in the processing device 130.
  • the adjustment module 470 can adjust the chamber edge point (the inner edge of the second edge) on the model to match the inner edge of the first edge in the cardiac image data.
  • step 550 can determine an edge target point based on a chamber edge on the matched three-dimensional heart mesh model.
  • the edge target points may be determined based on the probability of a second edge point within a range of chamber edges on the matched three-dimensional heart mesh model.
  • the probability may be calculated using a second classifier based on the second edge point training.
  • the probability may be calculated by invoking a first classifier trained based on a plurality of reference models or the average model.
  • step 550 can deform the three-dimensional cardiac mesh model based on the determined edge target points to obtain a further adjusted three-dimensional cardiac mesh model of the chamber edges.
  • the deformations may include similarity transformations, affine transformations, and other image micro-deformation methods, and the like. For example, in some embodiments, similarity transformations, piecewise affine transformations, and/or energy function based micro-variations may be performed sequentially based on the determined edge target points.
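As one concrete instance of the transforms named above, here is a closed-form (Umeyama-style) similarity-transform fit between corresponding point sets; the piecewise affine and energy-function steps are not shown.

```python
import numpy as np

def fit_similarity(src, dst):
    # Closed-form similarity transform: scale s, rotation R, translation t
    # minimizing ||(s * R @ src.T).T + t - dst||.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.diag([1.0] * (src.shape[1] - 1) + [np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

pts = np.random.rand(30, 3)
s, R, t = fit_similarity(pts, pts * 2.0 + 1.0)   # recovers s=2, R=I, t=[1,1,1]
moved = (s * (R @ pts.T)).T + t
```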
  • the adjustment module 470 in the processing device 130 can convert the adjusted three-dimensional heart mesh model into a heart chamber segmentation image (as shown in FIG. 26).
  • the adjustment module 470 in the processing device 130 can transmit the precisely adjusted heart chamber model or heart chamber segmentation map to the storage module 430 for storage, or can be sent to the display 280 for display.
  • the above description of the process of dividing the chamber by the processing device 130 is merely for convenience of description, and the present application is not limited to the scope of the embodiments. It will be understood by those skilled in the art that after understanding the working principle of the device, it is possible to arbitrarily adjust the order of the steps or add and delete certain steps without departing from the principle.
  • the step of constructing the average model 530 can be removed.
  • the adjustment module 470 can perform one or two of the above-described several modifications on the mesh model, or adopt other forms of micro-variation. Variations such as these are within the scope of the present application.
  • the model building module 440 can include an acquisition unit 610, a registration unit 620, a labeling unit 630, an average model generation unit 640, and a correlation factor generation unit 650.
  • the connection between modules in the model building module 440 may be wired, wireless, or a combination of the two. Any module can be local, remote, or a combination of both.
  • the obtaining unit 610 can be configured to acquire a plurality of reference models.
  • the obtaining unit 610 can obtain the above information by means of the database 120, a storage device outside the control and processing system 100, or user input. Its function can be implemented by the processor 220 in FIG. 2.
  • the plurality of reference models can include cardiac image data of a patient scanned at different times, at different locations, and at different angles.
  • the plurality of sets of cardiac data can include cardiac image data scanned by different patients at different locations and at different angles.
  • the obtaining unit 610 can also be used to acquire information such as modeling algorithms, parameters, and the like.
  • the obtaining unit 610 may send the acquired multiple reference models and/or other information to the registration unit 620, the labeling unit 630, the average model generating unit 640, or the correlation factor generating unit 650.
  • the registration unit 620 can be configured to adjust the plurality of reference models acquired by the obtaining unit 610 by the image registration method, and make the positions, ratios, and the like of the plurality of reference models consistent.
  • the image registration may include spatial dimensional registration, feature registration, transformation based registration, optimization algorithm based registration, image based modal registration, body based registration, and the like.
  • the registration unit 620 can register multiple reference models into one and the same coordinate system.
  • the registration unit 620 may send the registered multiple reference models to the storage module 430 for storage, or may send them to the labeling unit 630 and/or the average model generation unit 640 for further processing.
  • the labeling unit 630 can be used to label a plurality of data points (also referred to as point sets) at the edge of the chamber of the plurality of reference models.
  • the cardiac image or model may be a plurality of reference models after the image registration by the registration unit 620, or may be an average model constructed by the average model generation unit 640.
  • the chamber edges may be manually labeled by a user on a plurality of reference models after image registration by the registration unit 620.
  • the chamber edges can be automatically labeled by the labeling unit 630 based on distinct chamber edge features.
  • the labeling unit 630 can divide the entire cardiac image or model in the plurality of reference models into six portions according to the chambers, namely the left ventricle, the left atrium, the right ventricle, the right atrium, the myocardium, and the aorta. In some embodiments, the labeling unit 630 can divide the chamber edges on the plurality of reference models into sharp and non-sharp classes based on the degree of change at the edge of the chamber (also referred to as the gradient).
  • the labeling unit 630 may mark the edge points of the chambers that are connected to the outside, or whose change relative to the outside is small, as sharp, and mark those that are connected to other internal chambers, or whose change relative to the outside is greater, as non-sharp.
  • the two classes are shown by the two arrows in Figure 17.
  • the labeling unit 630 can divide the entire cardiac image or model on the plurality of reference models into 10 categories: left ventricular edge, left atrial sharp edge, left atrial non-sharp edge, right ventricular sharp edge, right ventricular non-sharp edge, right atrial sharp edge, right atrial non-sharp edge, aortic edge, left myocardial sharp edge, and left myocardial non-sharp edge (as shown in Figure 18).
  • the labeling unit 630 can register the plurality of reference models into one and the same coordinate system, and label the chamber edges on the plurality of reference models by comparing the positions of points on the reference models with points on the average model obtained by the average model generation unit 640: for each point on the edge of a chamber on a reference model, the category of the closest point on the average model is taken as the category of that point on the reference model.
  • the labeling unit 630 can send the plurality of reference models labeled with the set of chamber edge points to the storage module 430 for storage, or can send them to the training module 450, the average model generation unit 640, and/or the correlation factor generation unit 650 for further processing or calculation.
  • the average model generation unit 640 can be used to construct a three-dimensional cardiac average mesh model.
  • the average model generation unit 640 can extract the annotated chamber edges in the plurality of reference models or in the average model, obtain a mesh model for each of the plurality of reference models by processing the chamber edge model in each reference model or in the average model, and calculate the average mesh model by an image model construction method.
  • the image model construction method may include a Point Distribution Model (PDM), an Active Shape Model (ASM), an Active Contour Model (also called Snakes), an Active Appearance Model (AAM), and the like.
  • the average model generation unit 640 can divide the entire heart average model after the chamber annotation into six independent or combined sub-models.
  • the average model generation unit 640 can extract a plurality of chamber edges and determine a distribution of control points on the plurality of chamber edges to form a network by connecting the control points.
  • the average model generation unit 640 can obtain an average mesh model of the heart chamber by the ASM modeling method based on the mesh model, and corresponding feature values, feature vectors, and the like.
  • the average model generation unit 640 can add the influence of the correlation factor on the control points in the average model calculation.
  • the average model generation unit 640 may calculate the adjustment result of a control point using a weighted average, i.e., Σ(F_i * W_i), where F_i is the deformation parameter of chamber i and W_i is the influence coefficient or weight value of chamber i on the control point.
  • the average model generation unit 640 can transmit the obtained three-dimensional cardiac average mesh model to the storage module 430 for storage, or to the correlation factor generation unit 650 for calculation.
  • the average model generation unit 640 can also send the obtained three-dimensional cardiac average mesh model to the training module 450 and/or the matching module 460 for further processing.
  • the correlation factor generation unit 650 can be used to establish a relationship between each chamber and a control point on the average mesh model.
  • the relationship may be represented as a two-dimensional correlation factor matrix of chambers and control points; the values of the matrix may represent the influence coefficient or weight of each chamber on each control point.
  • the value of the matrix can be any real number between 0 and 1.
  • an example two-dimensional correlation factor matrix can look like this (one row per control point; columns ordered as left ventricle, left atrium, right ventricle, right atrium, myocardium, aorta):

    Control point 1:  1.0  0.0  0.0  0.0  0.0  0.0
    Control point 2:  0.8  0.2  0.0  0.0  0.0  0.0
  • control point 1 belongs to the left ventricle, but is not in the connection part of the left ventricle and the atrium, so only the influence coefficient of the left ventricle is 1, and the influence coefficient of other chambers is 0.
  • Control point 2 belongs to the left ventricle, but it is located at the junction of the left ventricle and the left atrium, so the influence coefficient of the left ventricle on it is 0.8, and the influence coefficient of the left atrium is 0.2.
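  • as a minimal sketch of how such a matrix could be stored and applied, the Python snippet below builds the example matrix above and evaluates the weighted average Σ(F_i * W_i) for every control point; the chamber ordering and the weighted_adjustment helper are illustrative assumptions, not part of the original disclosure:

```python
import numpy as np

# Hypothetical chamber order used only for this sketch.
CHAMBERS = ["left_ventricle", "left_atrium", "right_ventricle",
            "right_atrium", "myocardium", "aorta"]

# Correlation factor matrix W: one row per control point, one column per
# chamber, with the influence coefficients from the example above.
W = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # control point 1: interior of the LV
    [0.8, 0.2, 0.0, 0.0, 0.0, 0.0],   # control point 2: LV/LA junction
])

def weighted_adjustment(F, W):
    """Combine per-chamber deformation parameters F (one 3D displacement
    per chamber here) into one adjustment per control point, i.e.
    sum_i(F_i * W_i) evaluated for every control point at once."""
    return W @ F   # (n_points, n_chambers) @ (n_chambers, 3) -> (n_points, 3)

# Example: each chamber proposes a displacement for this iteration.
F = np.zeros((6, 3))
F[0] = [1.0, 0.0, 0.0]   # left ventricle pulls toward +x
F[1] = [0.0, 2.0, 0.0]   # left atrium pulls toward +y
print(weighted_adjustment(F, W))
# Control point 1 follows the LV only; control point 2 blends LV and LA.
```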
  • the correlation factor generation unit 650 can establish the correlation factor matrix based on the chamber to which each control point on the mesh model is assigned and the positional relationship of the control point with the other chambers. In some embodiments, the correlation factor generation unit 650 can calculate the range of influence or the influence coefficient of a correlation factor based on the distance of the control point from the other chambers. For example, the correlation factor generation unit 650 can bound the calculation of the influence coefficient by a maximum distance between the control point and the other chambers. In some embodiments, the correlation factor generation unit 650 can adjust the range of influence and the influence coefficients between different chambers according to the degree of tightness between the chambers.
  • the correlation factor generation unit 650 may send the obtained two-dimensional correlation factor matrix to the storage module 430 for storage, or may send it to the average model generation unit 640 and/or the adjustment module 470 for weighting calculation.
  • the above description of the model building module 440 is merely for convenience of description, and does not limit the present application to the scope of the listed embodiments. It will be understood that, after understanding the working principle of the module, the units of the module may be arbitrarily combined or connected to other units, and various modifications and changes may be made to the form and details of implementing the above module, without departing from this principle.
  • the registration unit 620 and/or the labeling unit 630 can be removed or merged with the acquisition unit 610 and the storage module 430.
  • the plurality of reference models or average models can include cardiac data or models that have been edge labeled by a user.
  • the plurality of reference models or average models can include cardiac data that has undergone coarse or fine chamber segmentation. Variations such as these are within the scope of the present application.
  • a plurality of cardiac reference models can be acquired.
  • the plurality of cardiac reference models may be acquired from the database 120, from user input, or from a storage device outside of the control and processing system 100.
  • the plurality of cardiac reference models can include cardiac image data of a patient scanned at different times, at different locations, and at different angles.
  • the plurality of cardiac reference models can include cardiac image data of different patients scanned at different locations and at different angles.
  • the plurality of cardiac reference models can include cardiac data or models that have been edge labeled by an expert.
  • the plurality of cardiac reference models can include cardiac data that has undergone coarse or fine chamber segmentation.
  • image registration may be performed on the acquired plurality of reference models. This step can be done by the registration unit 620 in the model building module 440.
  • any two reference models may be transformed into the same coordinate system by means of translation, rotation, scaling, etc., so that points corresponding to the same position in the two reference models are in one-to-one correspondence, thereby implementing information fusion.
  • the image registration may include registration based on spatial dimensions, registration based on features, registration based on transformation properties, registration based on optimization algorithms, registration based on image modality, registration based on the subject, and the like.
  • the registration based on spatial dimensions may include 2D/2D registration, 2D/3D registration, or 3D/3D registration.
  • the feature-based registration may include registration based on feature points (e.g., discontinuities, inflection points of graphics, line intersections, etc.), registration based on areas or regions (e.g., curves, surfaces, etc.), registration based on pixel values, registration based on external features, etc.
  • the transformation based registration may include rigid transformation registration, affine based transformation registration, projection based transformation registration, and/or curve based transformation registration, and the like.
  • the optimization algorithm based registration may include gradient descent based registration, Newton based registration, Powell based registration, genetic algorithm based registration, and the like.
  • the image based modal registration may include single mode registration and/or multimodal registration.
  • the subject-based registration may include registration based on images from the same patient, registration based on images from different patients, and/or registration based on patient data and maps.
  • the chamber edges may be labeled on the registered plurality of reference models. This step 730 can be accomplished by the labeling unit 630 in the model building module 440.
  • the chamber edge points can be manually labeled on the plurality of cardiac reference models by the user, and the set of edge points formed on each reference model can divide the heart into six parts: the left ventricle, the left atrium, the right ventricle, the right atrium, the myocardium, and the aorta.
  • the heart can be divided into 10 categories according to the degree of change of the chamber edge relative to the exterior and interior: left ventricular edge, left atrial sharp edge, left atrial non-sharp edge, right ventricular sharp edge, right ventricular non-sharp edge, right atrial sharp edge, right atrial non-sharp edge, aortic edge, left myocardial sharp edge, and left myocardial non-sharp edge (as shown in Figure 18).
  • a sharp edge may mean that the edge of the chamber is connected to the outside or does not change significantly relative to the outside.
  • a non-sharp edge may refer to an edge of the chamber that is connected to, or changes relative to, the interior or another chamber.
  • control points on the plurality of reference models can be determined. This step can be accomplished by the average model generation unit 640 in the model building module 440 based on the plurality of reference models that have gone through image registration and chamber edge annotation.
  • the axis of each chamber can be determined from the image registration results and the chamber edge annotation information of the plurality of reference models.
  • the axis may be the direction of the line connecting any two specified points on the chamber.
  • the determined axis may be the long axis formed by the line connecting the two points on the chamber that are furthest apart.
  • the labeled chamber edges of the plurality of reference models may be separately extracted; each chamber is sliced along the cross-sectional direction of its determined axis, and a dense set of points is formed at the edge of each slice according to cross-sectional and surface features, forming a point model of the average model (as shown in Figure 19).
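  • a minimal sketch of determining a chamber's long axis as the line between its two farthest edge points might look as follows; the brute-force helper below is illustrative, not the patented implementation:

```python
import numpy as np

def chamber_long_axis(points):
    """Return the long axis of a chamber as the unit direction between the
    two edge points that are farthest apart. Brute-force O(n^2) search,
    which is fine for control-point counts (a simplification otherwise)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    axis = points[j] - points[i]
    return axis / np.linalg.norm(axis), (points[i], points[j])

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [5, 5, 5]])
direction, endpoints = chamber_long_axis(pts)
```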
  • the control points on the various chambers can be determined from the point model.
  • the control points can be a subset of the point set on the point model. For example, the larger the subset, the larger the mesh model, the larger the amount of computation in the heart segmentation process, and the better the segmentation effect; the smaller the selected subset, the smaller the mesh model, and the smaller the amount of computation in the heart segmentation process.
  • the number of control points on the chamber can vary. For example, in the coarse segmentation phase, the number of control points can be reduced to quickly locate the edge of the chamber; in the fine segmentation phase, the number of control points can be larger, thereby achieving fine segmentation of the edge of the chamber.
  • a heart average mesh model can be constructed from the control points.
  • step 750 can join the control points into a polygonal network based on the relationships between the control points.
  • a triangular network can be formed by connecting adjacent control points on adjacent slices.
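  • a minimal sketch of connecting adjacent control points on adjacent slices into a triangular network, assuming every slice ring carries the same number of control points (a simplification not stated in the original), could look like this:

```python
import numpy as np

def triangulate_slices(n_slices, n_per_slice):
    """Connect control points on adjacent slice rings into triangles.
    Points are indexed slice-major: point j on slice s has index
    s * n_per_slice + j. Each quad between two consecutive rings is
    split into two triangles."""
    tris = []
    for s in range(n_slices - 1):
        for j in range(n_per_slice):
            a = s * n_per_slice + j
            b = s * n_per_slice + (j + 1) % n_per_slice
            c = a + n_per_slice           # same angular position, next slice
            d = b + n_per_slice
            tris.append((a, b, c))
            tris.append((b, d, c))
    return np.array(tris, dtype=int)

print(triangulate_slices(3, 4).shape)  # (16, 3): 2 bands * 4 quads * 2 tris
```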
  • the average mesh model can be obtained by an image warping method.
  • the image deformation method may include a Point Distribution Model (PDM), an Active Shape Model (ASM), an Active Contour Model (also called Snakes), an Active Appearance Model (AAM), and the like.
  • an average mesh model of the plurality of cardiac reference models can thus be obtained, as shown in the figure.
  • step 750 can perform a weighted average model calculation on the control point mesh model based on the two-dimensional correlation factor matrix.
  • the average model generation unit 640 may calculate the adjustment result of a control point using a weighted average, i.e., Σ(F_i * W_i), where F_i is the deformation parameter of chamber i and W_i is the influence coefficient or weight value of chamber i on the control point.
  • step 710 and step 720 can be combined.
  • steps 730 through 750 can be cycled multiple times. Variations such as these are within the scope of the present application.
  • the training module 450 can include a classification unit 810 and a classifier generation unit 820.
  • the connection between units in the training module 450 may be wired, wireless, or a combination of the two. Any unit can be local, remote, or a combination of both.
  • Classification unit 810 can be used to divide the possible chamber edge points on the plurality of reference models or on the average model into different chamber categories. This functionality can be implemented by processor 220. In some embodiments, the classification unit 810 can classify possible edge points on the reference models or the average model according to the chamber categories divided by the labeling unit 630 (as shown in FIG. 22).
  • the classification unit 810 can divide the possible edge points near the edges of the reference models or the average model into 10 chamber categories: left ventricular edge, left atrial sharp edge, left atrial non-sharp edge, right ventricular sharp edge, right ventricular non-sharp edge, right atrial sharp edge, right atrial non-sharp edge, aortic edge, left myocardial sharp edge, and left myocardial non-sharp edge.
  • the classification can be implemented by various classification methods, including but not limited to decision tree classification algorithms, Bayesian classification algorithms, artificial neural network (ANN) classification algorithms, k-nearest neighbors (kNN), support vector machines (SVM), classification algorithms based on association rules, ensemble learning classification algorithms, etc.
  • the classification unit 810 can divide points near the edge of the chamber into positive and negative samples based on the distance from the point near the edge of the chamber to the edge of the chamber.
  • the positive sample can be a data point within a certain threshold range from the edge of the chamber
  • the negative sample can be a data point that is further from the edge and other random locations in the space.
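  • a minimal sketch of this positive/negative split, with an assumed voxel-unit radius and random negative locations (all values illustrative):

```python
import numpy as np

def make_samples(edge_points, volume_shape, pos_radius=1.0, n_neg=1000,
                 rng=np.random.default_rng(0)):
    """Split candidate points into positive samples (within pos_radius of
    the labeled chamber edge) and negative samples (random locations that
    lie farther away). Distances are in voxel units for this sketch."""
    # Random candidate locations inside the volume.
    cand = rng.uniform(0, 1, size=(n_neg * 4, 3)) * np.array(volume_shape)
    # Distance of each candidate to its nearest labeled edge point.
    d = np.min(np.linalg.norm(cand[:, None, :] - edge_points[None, :, :],
                              axis=2), axis=1)
    positives = cand[d <= pos_radius]
    negatives = cand[d > pos_radius][:n_neg]
    return positives, negatives

edge = np.array([[10.0, 10.0, 10.0], [12.0, 10.0, 10.0]])
pos, neg = make_samples(edge, volume_shape=(20, 20, 20))
```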
  • the classification unit 810 may send the classification result or data of the possible edge points on the plurality of reference models or the average model to the storage module 430 for storage, or may send the classification result to the classifier generation unit 820 for further processing.
  • the classifier generation unit 820 can be used to acquire a trained classifier.
  • the classifier generating unit 820 may perform classifier training on the edge points of the plurality of reference models or of the average model according to the edge point categories divided by the classifying unit 810, and obtain the trained classifier (as shown in Figure 23).
  • the classifier generation unit 820 can train the classifier using PBT (probabilistic boosting tree).
  • the trained classifier may output a probability corresponding to the coordinate point after receiving any one of the coordinate points. The probability refers to the probability that a point is the edge of the chamber.
  • the classifier generation unit 820 can send the trained classifier to the storage module 430 for storage, or can be sent to the matching module 460 and/or the adjustment module 470 for calculation.
  • the above description of the training module 450 is merely for convenience of description, and does not limit the present application to the scope of the listed embodiments. It will be understood that, after understanding the working principle of the module, the units of the module may be arbitrarily combined or connected to other units, and various modifications and changes may be made to the form and details of implementing the above module, without departing from this principle.
  • the classification unit 810 can perform chamber partitioning on a plurality of reference models or average models such that the divided chamber categories are finer relative to the chamber categories of the label division. Variations such as these are within the scope of the present application.
  • the classification unit 810 in the training module 450 can acquire sample points in a plurality of reference models or average models.
  • the training module 450 can extract the chamber edges (as shown in FIG. 22) based on the chamber segmentation results on the plurality of reference models or on the average model, take the points within a certain range of each chamber edge as positive samples, and take points farther from the chamber edge and at other random locations in space as negative samples.
  • the certain range near the chamber edge may be 0.1 cm, 0.5 cm, 1 cm, 2 cm, or the like.
  • the classification unit 810 in the training module 450 can classify the acquired positive and negative sample points.
  • the training module 450 can add positive and negative sample points to different chamber categories according to a classification method.
  • the positive samples may be points within a certain range of the average model edge, and the negative samples may be points outside that range of the average model edge.
  • the certain range of the average model edge can be set to zero, in which case the positive samples are exactly the average model edge points.
  • positive and negative samples may be classified based on the degree of sharpness and the location of the sample points.
  • the location of a sample point may refer to the chamber to which the positive or negative sample belongs.
  • the training module 450 can divide the positive and negative sample points into 10 chamber categories according to the labeled chamber categories: left ventricular margin, left atrial sharp edge, left atrial non-sharp edge, right ventricular sharp edge, right ventricular non- Sharp edges, sharp edges of the right atrium, non-sharp edges of the right atrium, aortic margins, sharp edges of the left myocardium, and non-sharp edges of the left myocardium.
  • the classification method may include decision tree classification algorithms, Bayesian classification algorithms, artificial neural network (ANN) classification algorithms, k-nearest neighbors (kNN), support vector machines (SVM), classification algorithms based on association rules, ensemble learning classification algorithms, etc.
  • the decision tree classification algorithm may include ID3, C4.5, C5.0, CART, PUBLIC, SLIQ, SPRINT algorithms, and the like.
  • the Bayesian classification algorithm may include a naive Bayesian algorithm, a TAN algorithm (tree augmented Bayes network), and the like.
  • the artificial neural network classification algorithm may include a BP network, a radial basis function (RBF) network, a Hopfield network, a stochastic neural network (such as a Boltzmann machine), a competitive neural network (such as a Hamming network, a self-organizing map network, etc.).
  • Classification algorithms based on association rules may include CBA, ADT, CMAR, and the like.
  • the ensemble learning classification algorithm may include Bagging, Boosting, AdaBoost, PBT, and the like.
  • the training module 450 can obtain the classifier trained on the classified samples.
  • the classifier generation unit 820 in the training module 450 can train the above-described sample point categories through the PBT algorithm and obtain one or more trained classifiers (as shown in FIG. 23).
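  • since the PBT algorithm itself is not spelled out here, the sketch below substitutes a generic boosted classifier from scikit-learn for the PBT stage, training one point classifier per chamber category; all names, features, and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_edge_classifiers(features_by_category):
    """Train one boosted point classifier per chamber category.
    features_by_category maps a category name (e.g. "left_atrium_sharp")
    to (X, y), where X holds per-point features and y is 1 for positive
    samples (near the edge) and 0 for negatives. GradientBoosting stands
    in here for the PBT classifier named in the text."""
    classifiers = {}
    for name, (X, y) in features_by_category.items():
        clf = GradientBoostingClassifier(n_estimators=50, max_depth=2)
        clf.fit(X, y)
        classifiers[name] = clf
    return classifiers

# Toy data for one category: a single intensity-gradient feature per point.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(int)
clfs = train_edge_classifiers({"left_atrium_sharp": (X, y)})
p = clfs["left_atrium_sharp"].predict_proba(X[:3])[:, 1]  # edge probability
```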
  • the PBT may include a two-level PBT algorithm or a multi-level PBT algorithm.
  • the classifier may include one or more classifiers (also referred to as "first classifiers") trained with points within a certain range of the edges of the plurality of reference models or of the average model as positive samples.
  • the classifier may include one or more classifiers (also referred to as "second classifiers") trained with points within a certain range of the edges of the image to be processed as positive samples.
  • steps 910 and 920 may not distinguish between positive and negative samples and directly classify all points near the edge of the chamber.
  • the maximum distance between the positive and negative sample points from the edge of the chamber can be 2 cm. Variations such as these are within the scope of the present application.
  • the matching module 460 can include an acquisition unit 1010, an image point extraction unit 1020, a Hough transform unit 1030, and a model matching unit 1040.
  • the connection manner between the units in the matching module 460 may be wired, wireless, or a combination of the two. Any unit can be local, remote, or a combination of both.
  • the acquisition unit 1010 can acquire an image.
  • the acquired image is an image to be processed.
  • the image may be an image reconstructed based on image data.
  • the reconstructed image may be obtained from other modules of processing device 130.
  • the reconstructed image may be acquired by the acquisition unit 1010 from the image reconstruction module 420.
  • the reconstructed image may be an image stored in the storage module 430 after the image reconstruction module 420 reconstructs the image.
  • the image may be an image that is input into the system via an external device.
  • an external device inputs an image into the system through communication port 250.
  • the acquisition unit 1010 can obtain an average model.
  • the average model may be a three-dimensional cardiac average mesh model generated by the average model generation unit 640.
  • the obtaining unit 1010 can acquire the first classifier trained by the training module 450.
  • the first classifier can be obtained based on a point classifier that classifies points according to image features.
  • the image features can be related to the degree of sharpness and the location of the points.
  • the obtaining unit 1010 can acquire parameters required by the model matching module 460 to perform image matching.
  • the obtaining unit 1010 can acquire parameters for the generalized Hough transform.
  • the parameters of the generalized Hough transform may be derived based on the three-dimensional average mesh model and its chamber edge control points. For example, by determining the centroid of the average model edge, and calculating the offset of every control point on the edge of the average model relative to the centroid and its gradient direction relative to the centroid, the offset vectors of the control points corresponding to each gradient direction can be obtained (hereinafter referred to as gradient vectors).
  • the average model can be placed in an xyz coordinate system and the coordinates of each gradient vector in the xyz coordinate system determined.
  • the coordinates of each gradient vector can be converted to coordinates in a polar coordinate system.
  • the angle between the projection of the gradient vector in the xy plane and the x coordinate axis may be taken as the first angle ψ, whose value ranges from -180 degrees to 180 degrees.
  • the angle between the gradient vector and the xy plane may be taken as the second angle φ, whose value ranges from -90 degrees to 90 degrees.
  • the two angles (ψ, φ) representing the gradient vector described above may be discretized to obtain a table (also referred to as an R-table) as described below.
  • the offsets on the R-table can be scaled or rotated at different angles to detect shapes of different sizes or different angles.
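  • a minimal sketch of building such an R-table, under the simplifying assumption that the gradient direction of each control point can be approximated by its direction from the centroid (the original derives it from the model itself), with assumed angle-bin counts:

```python
import numpy as np
from collections import defaultdict

def build_r_table(control_points, n_bins=(36, 18)):
    """Build a generalized-Hough R-table from the average model's edge
    control points. For each point, the two polar angles of its gradient
    vector (psi in [-180, 180), phi in [-90, 90]) are discretized, and
    the offset from the point back to the centroid is stored under that
    angle bin."""
    centroid = control_points.mean(axis=0)
    table = defaultdict(list)
    for p in control_points:
        g = p - centroid                       # assumed gradient direction
        psi = np.degrees(np.arctan2(g[1], g[0]))
        phi = np.degrees(np.arcsin(g[2] / np.linalg.norm(g)))
        b_psi = min(int((psi + 180.0) * n_bins[0] / 360.0), n_bins[0] - 1)
        b_phi = min(int((phi + 90.0) * n_bins[1] / 180.0), n_bins[1] - 1)
        table[(b_psi, b_phi)].append(centroid - p)   # offset vector
    return centroid, table
```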
  • the image point extraction unit 1020 can acquire an edge probability map of the image to be processed. Specifically, in some embodiments, the image point extraction unit 1020 may calculate the probability that each point on the image to be processed is a chamber edge by inputting the coordinates of the point into the classifier acquired by the acquiring unit 1010, and obtain an edge probability map of the image to be processed according to the probability distribution of the points.
  • the edge probability map may include a grayscale gradient map, a color gradient map (as shown in FIG. 24), and the like.
  • the image point extraction unit 1020 may use a point on the edge probability map of the image to be processed that has a probability value greater than a certain threshold as the first edge point.
  • the threshold may be any real number between 0 and 1, for example, 0.3, 0.5, and the like.
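  • extracting the first edge points from the probability map then reduces to a thresholding step; a minimal sketch (the threshold value is one of the examples above):

```python
import numpy as np

def first_edge_points(prob_map, threshold=0.3):
    """Take every voxel whose edge probability exceeds the threshold as a
    first edge point; return the point coordinates and their probabilities."""
    mask = prob_map > threshold
    return np.argwhere(mask), prob_map[mask]
```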
  • the model matching unit 1030 can match the average model to the image to be processed.
  • the model matching unit 1030 can match the average model to the edge probability map of the image to be processed by a weighted generalized Hough transform.
  • the weighted generalized Hough transform may include obtaining all possible edge reference points on the image to be processed according to the first edge points on the image to be processed and the R-table, and determining the probability accumulation values of all the edge reference points by a weighted accumulation method.
  • the edge reference point with the largest probability accumulation value is taken as the centroid of the image.
  • the transformation parameter of the model centroid to the image centroid is used as the transformation parameter of the model.
  • the edge reference point may be obtained by performing coordinate transformation on the first edge point of the image to be processed according to parameters in the R-table.
  • the weighted accumulation may be a process of accumulating the probabilities of the first edge points corresponding to the same edge reference point (i.e., the first edge points that fall onto the same edge reference point after being offset by the parameters in the R-table).
  • the centroid of the model can be transformed to a position coincident with the centroid of the image according to the transformation parameters.
  • the transformation parameters can include the rotation angle, the scaling ratio, and so on.
  • the model matching unit 1030 may perform rotation, scaling, and the like on the points on the model according to the determined transformation parameters, thereby obtaining a model matching the image to be processed (as shown in FIG. 25).
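  • a minimal sketch of the weighted voting itself, with the R-table angle lookup simplified away (each point is assumed to already carry its offset list) and an assumed coarse accumulator grid:

```python
import numpy as np

def weighted_ght_vote(edge_points, edge_probs, offsets_for_point,
                      grid_shape, cell=4.0):
    """Weighted generalized Hough voting: every first edge point casts
    votes at all candidate centroid positions given by its offsets, each
    vote weighted by the point's edge probability. The accumulator cell
    with the largest sum is taken as the image centroid."""
    acc = np.zeros(grid_shape)
    for p, prob, offsets in zip(edge_points, edge_probs, offsets_for_point):
        for off in offsets:
            c = np.floor((p + off) / cell).astype(int)
            if np.all(c >= 0) and np.all(c < np.array(grid_shape)):
                acc[tuple(c)] += prob          # weighted accumulation
    best = np.unravel_index(np.argmax(acc), acc.shape)
    return (np.array(best) + 0.5) * cell       # estimated image centroid
```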
  • the above description of the matching module 460 is merely for convenience of description, and does not limit the present application to the scope of the listed embodiments. It can be understood that, after understanding the working principle of the module, any unit in the module may be arbitrarily combined or connected to other units, and various modifications and changes may be made to the form and details of implementing the above module, without departing from this principle.
  • the image point extraction unit 1020 can be removed, and the edge probability map of the image to be processed can be directly obtained by the training module 450. Variations such as these are within the scope of the present application.
  • in step 1110, an average model, an image to be processed, and a trained first classifier can be obtained.
  • the average model may be a three-dimensional cardiac average mesh model obtained by the average model generation unit 640 by the image model construction method based on the plurality of reference models.
  • the image model construction method may include a Point Distribution Model (PDM), an Active Shape Model (ASM), an Active Contour Model (also called Snakes), an Active Appearance Model (AAM), and the like.
  • Step 1110 can be implemented by acquisition unit 1010.
  • the image to be processed acquired by the acquisition unit 1010 may be an image reconstructed by the image reconstruction module 420.
  • in step 1110, an R-table can be obtained based on the average model.
  • in step 1110, the parameters of the generalized Hough transform can be determined.
  • in step 1110, the first edge points of the image to be processed may be acquired based on an edge probability map of the image to be processed.
  • the first edge point may be a point on the edge probability map of the image to be processed whose probability is greater than a certain threshold, for example, the probability may be 0.3.
  • the edge probability map may be obtained by inputting the coordinates of each point on the image to be processed into the classifier acquired by the obtaining unit 1010, calculating the probability that each point is a chamber edge, and forming the map according to the probability distribution of the points.
  • the angles (ψ, φ) corresponding to the gradient direction of each first edge point on the image to be processed may be calculated.
  • the votes for all edge reference points may be weighted and accumulated according to the votes each edge reference point receives and the probability values of the corresponding first edge points.
  • the weighted accumulation may be to accumulate the probability of the first edge point corresponding to the same edge reference point.
  • the parameter in the R-table corresponding to the edge reference point with the largest probability accumulation value may be used as the transformation parameter of the image to be processed.
  • the transformation parameters may include a rotation angle, a scaling, and the like.
  • the weighted accumulation method can be expressed by a formula such as S_j = Σ_i p_i * δ(i, j), where:
  • i is the index of the first edge point;
  • j is the index of the possible edge reference points voted on the voting image;
  • p_i is the probability value of each first edge point;
  • δ(i, j) is a 0-1 binary function, i.e., its value is 1 when the i-th first edge point contributes a vote to the j-th possible edge reference point, and 0 otherwise.
  • a model corresponding to the image to be processed can be obtained.
  • the first edge point on the image to be processed may be transformed based on the determined weighted generalized Hough transform parameter.
  • the coordinates of the first edge points on the image to be processed may be transformed according to the angle and the scaling ratio in the R-table corresponding to the edge reference point, and the corresponding information on the average model is mapped onto the image to be processed, obtaining the image to be processed matched with the average mesh model.
  • the adjustment module 470 can include an acquisition unit 1210, a target point determination unit 1220, and a model transformation unit 1230.
  • the connection manner between the units in the adjustment module 470 may be wired, wireless, or a combination of the two. Any unit can be local, remote, or a combination of both.
  • the obtaining unit 1210 can acquire the model and the trained second classifier. Specifically, the acquiring unit 1210 may acquire coordinate data of the second edge point on the model. In some embodiments, the second edge point of the model can be a control point on the model. In some embodiments, the obtaining unit 1210 can acquire the second classifier trained by the training module 450.
  • the classifier can be trained with a PBT classification algorithm based on the 10 chamber categories divided by chamber and edge sharpness, obtaining 10 classifiers: left ventricular edge, left atrial sharp edge, left atrial non-sharp edge, right ventricular sharp edge, right ventricular non-sharp edge, right atrial sharp edge, right atrial non-sharp edge, aortic edge, left myocardial sharp edge, and left myocardial non-sharp edge.
  • the obtaining unit 1210 can acquire the model processed by the model transform unit 1230.
  • the target point determining unit 1220 may determine the target points corresponding to the second edge points on the model. Taking one second edge point on the model as an example, the target point determining unit 1220 may determine a plurality of candidate points around that second edge point. In some embodiments, the target point determining unit 1220 may input the determined plurality of candidate points around the second edge point into the classifier acquired by the obtaining unit 1210, determine the probabilities that the second edge point and the plurality of candidate points around it correspond to the image edge, and determine the target point of the second edge point according to the probabilities. In some embodiments, the target point determining unit 1220 can determine the corresponding target points of all second edge points on the model.
  • the model transformation unit 1230 can adjust the model.
  • the model transformation unit 1230 can adjust the position of the model edge point based on the target point determined by the target point determination unit 1220.
  • the adjustments may include similarity transformations, piecewise affine transformations, and/or energy function based micro-variations, and the like.
  • the model transformation unit 1230 can repeat the adjustment of the model multiple times, and each adjustment requires a re-determination of the target point.
  • the model transformation unit 1230 can determine whether the preset condition is satisfied after the model is adjusted. For example, whether the number of model adjustments reaches a certain threshold.
  • the model transformation unit 1230 can obtain a precisely adjusted heart chamber model.
  • the precisely adjusted heart chamber model can be very close to the real heart.
  • the above description of the adjustment module 470 is merely for convenience of description, and does not limit the present application to the scope of the listed embodiments. It will be understood that, after understanding the working principle of the module, the units of the module may be arbitrarily combined or connected to other units, and various modifications and changes may be made to the form and details of implementing the above module, without departing from this principle.
  • the model transformation unit 1230 may use a preset number of loops instead of determining the number of fine-adjustment loops of the adjustment module 470 by threshold judgment. Variations such as these are within the scope of the present application.
  • in step 1310, the second edge points on the model and the trained classifier can be obtained.
  • the classifiers acquired by the obtaining unit 1210 and the obtaining unit 1010 are not of the same type.
  • the classifier acquired by the obtaining unit 1010 may be trained by the training module 450 with points within a certain range of the edge of the average mesh model as positive samples.
  • the classifier acquired by the acquiring unit 1210 may be obtained by training with points within a certain range of the edge of the image to be processed as positive samples.
  • the classifier acquired by the obtaining unit 1010 may be a first classifier, and the classifier acquired by the obtaining unit 1210 may be a second classifier.
  • a target point of the second edge point on the model may be determined based on the second classifier.
  • step 1320 may input a candidate point within a certain range of the second edge point of the model into the second classifier, and obtain a probability that the candidate point within a certain range of the second edge point of the model belongs to the edge of the image.
  • the target point of the second edge point of the one model may be determined by the target point determining unit 1220 based on the determined probability.
  • the second edge point on the model can be the interior edge point on the model (the inner edge of the second edge) corresponding to the inner edge of the first edge of the cardiac image data.
  • the process of transforming the second edge point to the target point may be a process of accurately matching the inner edge of the interior of the model with the inner edge of the first edge of the cardiac image data.
  • the inner edge may refer to an edge used for precise matching; when the method disclosed herein is applied to other objects, organs, or tissues, the inner edge is not necessarily geometrically internal, nor necessarily inside an outer edge.
  • a second edge point on the model can be transformed to the target point based on the determined target point.
  • in step 1330, the second edge points of the model can be transformed using a variety of transformations.
  • the second edge points of the model may be corrected by the model transformation unit 1230 using similarity transformation and affine transformation.
  • in step 1340, it may be determined whether the adjustment result satisfies a preset condition.
  • the preset condition may be whether the number of adjustments reaches a certain threshold.
  • the threshold is adjustable.
  • if the preset condition is satisfied, the process proceeds to step 1350, and the precisely matched model is obtained.
  • if the preset condition is not satisfied, the process returns to step 1320, and the target point determination unit 1220 can determine, based on the new model edge points, the target points corresponding to the new model edge points.
  • FIG. 14 is an exemplary flow diagram of determining a target point, shown in accordance with some embodiments of the present application.
  • Flow 1400 can be implemented by target point determination unit 1220.
  • Fig. 14 is a process of determining a corresponding target point of a point on the edge of the average model, but those skilled in the art should understand that the method can be used to obtain a plurality of target points corresponding to a plurality of edge points.
  • flow 1400 can correspond to step 1320.
  • a normal to an average model edge point can be determined.
  • the direction of the normal is directed from the interior of the average model to the exterior. Specific normal acquisition methods can be found, for example, in flow 1500 and its description.
  • in step 1420, the step size and the search range along the normal direction of the average model edge point can be obtained.
  • the step size and search range may be pre-set values.
  • the step size and search range may be user input.
  • the user input can be transmitted into the processing device 130 through the communication port 250 by an external device.
  • the search range can be a line segment starting from the model edge point and extending in at least one of the two directions along the line of the normal (toward the outside or the inside of the model).
  • one or more candidate points may be determined based on the step size and the search range.
  • for example, if the search range is 10 cm and the step size is set to 1 cm, 10 points can be determined in each direction of the line where the normal lies, for a total of 21 candidate points (including the edge point itself).
  • the step size and the number of steps can also be determined, and the candidate points are determined based on the step size and the number of steps. For example, if the step size is set to 0.5 cm and the number of steps is set to 3, 3 points can be determined in each direction of the line where the normal line is located, and the farthest candidate point is 1.5 cm from the edge point, and a total of 7 candidate points.
  • a probability that the one or more candidate points correspond to a range of image edges can be determined.
  • the second classifier is trained to take a point within a certain range of the image edge as a positive sample.
  • the certain range may be a preset value set by the machine or the user.
  • the preset value may be 1 cm.
  • one of the one or more candidate points may be determined to be a target point based on a probability that the one or more candidate points correspond to a range of image edges.
  • the target point can be obtained based on a function of the form argmax_i (P_i - ω * d_i), where:
  • P_i is the probability that candidate point i corresponds to a certain range of the image edge;
  • d_i is the Euclidean distance between the candidate point and the average model edge point;
  • ω is a weight, a constant used to balance the relationship between the distance and the probability value.
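  • a minimal sketch of the candidate search and target selection combined, using the step-size example above (step 0.5, 3 steps, 7 candidates) and the score P_i - ω * d_i; the classifier_prob callback and the value of ω are illustrative assumptions:

```python
import numpy as np

def find_target_point(edge_point, normal, classifier_prob, step=0.5,
                      n_steps=3, omega=0.1):
    """Search along the normal of a model edge point for its target point.
    Candidates sit at offsets -n_steps*step .. +n_steps*step on the normal
    line (including the point itself); classifier_prob(point) is assumed
    to return the second classifier's edge probability for a 3D point.
    The winner maximizes P_i - omega * d_i, where d_i is the Euclidean
    distance to the current edge point."""
    n = normal / np.linalg.norm(normal)
    offsets = np.arange(-n_steps, n_steps + 1) * step      # 7 candidates
    candidates = edge_point + offsets[:, None] * n
    scores = [classifier_prob(c) - omega * abs(o)
              for c, o in zip(candidates, offsets)]
    return candidates[int(np.argmax(scores))]

# Toy probability field peaking at x = 1.0:
prob = lambda p: np.exp(-(p[0] - 1.0) ** 2)
print(find_target_point(np.zeros(3), np.array([1.0, 0.0, 0.0]), prob))
```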
  • a plurality of target points of the plurality of model edge points may be determined based on the process 1400, and then the plurality of model edge points and the model may be transformed according to the plurality of target points.
  • a specific transformation process can be seen, for example, in Figure 16 and its description.
  • FIG. 15 is an exemplary flow chart for determining an edge point normal, shown in accordance with some embodiments of the present application.
  • the process 1500 can correspond to step 1410.
  • a plurality of polygons may be determined from a plurality of edge points of the average model.
  • the plurality of polygons may be formed by joining the plurality of edge points.
  • the plurality of polygons may be in the shape of a triangle, a quadrangle, a polygon, or the like.
  • the process of determining a plurality of polygons from a plurality of edge points may also be referred to as a meshing process.
  • the plurality of polygons may be referred to as a grid, and the plurality of edge points may be referred to as nodes.
  • the average model surface may have formed a plurality of polygons corresponding to the average model edge points, in which case step 1510 may be omitted.
  • in step 1520, a plurality of polygons adjacent to an average model edge point may be determined.
  • a plurality of normals corresponding to the planes to which the plurality of polygons belong may be determined.
  • the plurality of normal directions corresponding to the planes of the plurality of polygons are located on the same side (outside or inside the average model).
  • the plurality of normal vectors corresponding to the associated planes of the plurality of polygons are unit vectors.
  • a normal to the edge point can be determined based on the plurality of normals.
  • a plurality of normal vectors corresponding to the plurality of polygons may be added or averaged.
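  • a minimal sketch of this face-normal averaging, assuming consistent triangle winding so that all face normals point to the same side of the model:

```python
import numpy as np

def vertex_normal(vertices, triangles, v_idx):
    """Normal of mesh vertex v_idx: average of the unit normals of all
    triangle faces adjacent to the vertex, renormalized to unit length."""
    normals = []
    for a, b, c in triangles:
        if v_idx in (a, b, c):
            n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
            normals.append(n / np.linalg.norm(n))
    n = np.sum(normals, axis=0)     # add the adjacent face normals
    return n / np.linalg.norm(n)

verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [-1, 0, 0]])
tris = [(0, 1, 2), (0, 2, 3)]
print(vertex_normal(verts, tris, 0))   # -> [0, 0, 1] for this flat patch
```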
  • FIG. 16 is an exemplary flow diagram of transforming average model edge points, shown in accordance with some embodiments of the present application.
  • the process 1600 can be implemented by the model transformation unit 1230.
  • a similarity transformation can be performed on the average model edge points.
  • the grid composed of the average model edge points can be taken as a whole, and the average model is transformed toward the target points determined for the chamber edge points, mainly by operations such as translation, rotation, and scaling.
  • a piecewise affine transformation can be performed on the average model edge points.
  • the grid of average model edge points can be divided according to certain rules.
  • the heart model can be divided according to the heart chamber.
  • the model mesh can be divided into six parts of the left ventricle, the left atrium, the right ventricle, the right atrium, the aorta, and the left myocardium according to the chamber.
  • piecewise affine transformation refers to performing an affine transformation on the mesh of each partitioned portion.
  • the affine transformation may refer to performing a motion transformation and a shape transformation on a plurality of nodes of respective sections.
  • the average model edge point may be affected by multiple chambers.
  • the effect of the average model edge point on the influence of different chambers can be expressed in the form of a correlation factor.
  • the average model edge point can be converted toward the target point.
  • the average model edge points are affected by multiple chambers.
  • the correlation factor serves as the weight of the conversion parameters (such as movement displacement, deformation ratio, etc.).
  • the model transformation unit 1230 converts the edge points on the multi-part grid of the average model to their corresponding positions by piecewise affine transformation.
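  • a minimal sketch of blending per-chamber affine transforms with the correlation factors as weights (the per-chamber transforms themselves are assumed to be given by the piecewise affine fit):

```python
import numpy as np

def blend_affine(point, transforms, weights):
    """Move one model edge point under the piecewise affine transforms of
    the chambers that influence it. transforms is a list of (A, t) pairs,
    one per chamber; weights holds the point's correlation factors w_{i,k},
    so the result is the factor-weighted blend of per-chamber results."""
    moved = np.array([A @ point + t for (A, t) in transforms])
    return np.average(moved, axis=0, weights=weights)

# Two chambers pulling a junction point in different directions:
T_lv = (np.eye(3), np.array([1.0, 0.0, 0.0]))
T_la = (np.eye(3), np.array([0.0, 1.0, 0.0]))
print(blend_affine(np.zeros(3), [T_lv, T_la], weights=[0.8, 0.2]))
# -> [0.8, 0.2, 0.0]
```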
  • an energy function based micro-variation can be performed on the average model edge point.
  • the energy function can be expressed in a form such as E = E_ext + Σ_c α_c * E_int,c, where:
  • E_ext is the external energy, indicating the relationship between the current point and the detected target point;
  • E_int is the internal energy, indicating the relationship between the current point and an edge point of the average model;
  • α_c is the weight used to balance the internal and external energy; different chambers use different weights;
  • c denotes each chamber.
  • the external energy function can be expressed in a form such as E_ext = Σ_i ||v_i - t_i||², where v_i is the current position of point i and t_i is its detected target point.
  • the internal energy function can be expressed in a form such as E_int = Σ_i Σ_{j ∈ N(i)} Σ_k w_{i,k} * ||(v_i - v_j) - T_affine,k(m_i - m_j)||², where:
  • i is the point index;
  • j ranges over the neighborhood N(i) of point i (so v_i - v_j corresponds to the edge of each triangle at the current point position);
  • w_{i,k} is the correlation factor (the factor of each chamber k to the current point i);
  • m_i is the point on the average model (determined by PDM/ASM);
  • m_i - m_j corresponds to the edge of each triangle of the average mesh model;
  • T_affine,k is the affine transformation of each chamber k, i.e., the transformation relationship obtained by the piecewise affine transformation (PAT).
  • the point coordinates v_i are all three-dimensional in space.
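  • a minimal sketch that evaluates this energy for a single chamber, under the assumed quadratic reconstruction above (the helper and its arguments are illustrative, not a verbatim formula from the original):

```python
import numpy as np

def chamber_energy(v, targets, m, edges, w, A, alpha):
    """Evaluate E = E_ext + alpha * E_int for one chamber. v: current
    point positions (n, 3); targets: detected target points (n, 3);
    m: average model points (n, 3); edges: (i, j) index pairs of triangle
    edges; w: this chamber's correlation factor per point; A: the 3x3
    linear part of the chamber's piecewise affine transform (its
    translation cancels in the edge differences, so it is not needed)."""
    e_ext = float(np.sum(np.linalg.norm(v - targets, axis=1) ** 2))
    e_int = 0.0
    for i, j in edges:
        diff = (v[i] - v[j]) - A @ (m[i] - m[j])
        e_int += w[i] * float(diff @ diff)
    return e_ext + alpha * e_int
```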
  • the present application uses specific words to describe embodiments of the present application.
  • reference to "one embodiment," "an embodiment," and/or "some embodiments" means a feature or structure associated with at least one embodiment of the present application. Therefore, it should be emphasized and noted that "an embodiment" or "one embodiment" or "an alternative embodiment" referred to two or more times in different places in this specification does not necessarily refer to the same embodiment. Furthermore, some of the features, structures, or characteristics of one or more embodiments of the present application can be combined as appropriate.
  • aspects of the present application can be illustrated and described by a number of patentable categories or situations, including any new and useful process, machine, product, or combination of materials, or any new and useful improvement thereof. Accordingly, various aspects of the present application can be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • the above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system."
  • aspects of the present application may be embodied in a computer product located in one or more computer readable medium(s) including a computer readable program code.
  • a computer readable signal medium may contain a propagated data signal containing a computer program code, for example, on a baseband or as part of a carrier.
  • the propagated signal may have a variety of manifestations, including electromagnetic forms, optical forms, and the like, or a suitable combination.
  • the computer readable signal medium may be any computer readable medium other than a computer readable storage medium that can be communicated, propagated, or transmitted for use by connection to an instruction execution system, apparatus, or device.
  • Program code located on a computer readable signal medium can be propagated through any suitable medium, including a radio, cable, fiber optic cable, RF, or similar medium, or a combination of any of the above.
  • the computer program code required for the operation of various parts of the application can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, etc.; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code can run entirely on the user's computer, or run as a stand-alone software package on the user's computer, or partially on the user's computer, partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer via any network, such as a local area network (LAN) or wide area network (WAN), or connected to an external computer (e.g., via the Internet), or used in a cloud computing environment, or as a service, such as software as a service (SaaS).


Abstract

An image processing method. The method includes: acquiring image data (510); reconstructing an image based on the image data (520), wherein the image includes one or more first edges; acquiring a model, wherein the model includes one or more second edges corresponding to the one or more first edges; matching the model with the reconstructed image (540); and adjusting the one or more second edges of the model according to the one or more first edges (550).

Description

An image segmentation method and system
Technical Field
The present application relates to an image processing method, and in particular, to a probability-based image matching and segmentation method and system.
Background
With the improvement of living standards and the extension of life expectancy, cardiovascular disease has become the leading cause of death, so early diagnosis of cardiovascular disease can effectively reduce mortality. Understanding the imaging appearance of cardiac structures and their functional data is an important prerequisite for the correct diagnosis of heart disease. The development of CT technology has significantly improved temporal resolution and reduced cardiac motion artifacts, showing good application potential in displaying fine cardiac structures.
Image segmentation is a key technology in image analysis and plays an increasingly important role in medical imaging. Image segmentation is an indispensable means of extracting quantitative information about specific tissues from medical images, and is also a preprocessing step and prerequisite for visualization. Segmented images are widely used in various applications, such as quantitative analysis of tissue volume, diagnosis, localization of diseased tissue, study of anatomical structures, treatment planning, partial volume correction of functional imaging data, and computer-guided surgery.
After a CT image is obtained by reconstruction, the heart chambers in the image need to be located and identified, which requires cardiac edge detection. Deformable models are a common approach in the field of heart chamber segmentation. A heart chamber model is obtained by averaging the image data corresponding to multiple sets of clinical heart chamber models, and a matched image is obtained by matching the model with the image.
Brief Summary
According to one aspect of the present application, a method is disclosed. The method includes: acquiring image data; reconstructing an image based on the image data, wherein the image includes one or more first edges, the first edges having a plurality of points; acquiring a model, wherein the model includes one or more second edges corresponding to the one or more first edges; matching the model with the reconstructed image; and adjusting the one or more second edges of the model according to the one or more first edges.
In some embodiments, the image data includes a brain image, a skull image, a chest image, a cardiac image, a breast image, an abdominal image, a kidney image, a liver image, a pelvic image, a perineal image, a limb image, a spine image, or a vertebra image.
In some embodiments, acquiring the model includes acquiring a plurality of reference models; registering the acquired plurality of reference models; determining a plurality of control points on the registered plurality of reference models; obtaining control points of the model based on the plurality of control points on the plurality of reference models; and generating the model according to the control points of the model.
In some embodiments, the method further includes generating correlation factors for the control points of the model according to the relationship between the control points of the model and the one or more second edges in the model.
In some embodiments, adjusting the one or more second edges of the model includes determining a reference point on a second edge; determining a target point corresponding to the reference point; and adjusting the second edge of the model according to the target point.
In some embodiments, determining the target point corresponding to the reference point includes determining a normal of the reference point; acquiring a step size and a search range; determining one or more candidate points along the normal according to the step size and the search range; acquiring a first classifier; determining, according to the first classifier, probabilities that the one or more candidate points correspond to the first edge; and determining the target point based on the probabilities that the one or more candidate points correspond to the first edge.
In some embodiments, determining the normal of the reference point includes determining one or more polygonal meshes adjacent to the reference point; determining one or more normals corresponding to the one or more polygonal meshes; and determining the normal of the reference point according to the one or more normals.
In some embodiments, matching the model with the reconstructed image includes acquiring a second classifier; performing a weighted generalized Hough transform according to the second classifier; and matching the model and the image according to the result of the weighted generalized Hough transform.
In some embodiments, acquiring the first classifier includes acquiring a point classifier that classifies the plurality of points of the first edge according to image features related to the degree of sharpness and the location; acquiring a plurality of points classified by the point classifier, wherein at least some of the points classified by the point classifier are located within a certain range of the first edge; determining the points classified by the point classifier within the certain range of the first edge as positive samples; determining the points classified by the point classifier outside the certain range of the first edge as negative samples; classifying the positive samples and the negative samples; and obtaining the trained first classifier according to the classified positive samples and negative samples.
In some embodiments, acquiring the second classifier includes acquiring a plurality of points of the model, wherein at least some of the plurality of points are located within a certain range of the second edge; determining points within the certain range of the second edge as positive samples; determining points outside the certain range of the second edge as negative samples; classifying the positive samples and the negative samples according to the degree of sharpness and the location; and obtaining the trained second classifier according to the classified positive samples and negative samples.
According to another aspect of the present application, a system is disclosed. The system includes a memory configured to store data and instructions, and a processor in communication with the memory. When executing the instructions in the memory, the processor is configured to: acquire image data; reconstruct an image based on the image data, wherein the image includes one or more first edges, the first edges having a plurality of points; acquire a model, wherein the model includes one or more second edges corresponding to the one or more first edges; match the model with the reconstructed image; and adjust the one or more second edges of the model according to the one or more first edges.
According to another aspect of the present application, a non-transitory computer readable medium carrying a computer program product is disclosed. The computer program product includes a plurality of instructions. The plurality of instructions are configured to: acquire image data; reconstruct an image based on the image data, wherein the image includes one or more first edges, the first edges having a plurality of points; acquire a model, wherein the model includes one or more second edges corresponding to the one or more first edges; match the model with the reconstructed image; and adjust the one or more second edges of the model according to the one or more first edges.
Description of the Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can apply the present application to other similar scenarios according to these drawings without creative effort. Unless apparent from the context or otherwise stated, the same reference numerals in the figures represent the same structures and operations.
FIG. 1 is a schematic diagram of an application scenario of an example control and processing system according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an example system configuration of a processing device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of an example mobile device for implementing some specific systems of the present application according to some embodiments of the present application;
FIG. 4 is a schematic diagram of an example processing device according to some embodiments of the present application;
FIG. 5 is an exemplary flowchart of operating the processing device according to some embodiments of the present application;
FIG. 6 is a schematic diagram of an example model construction module according to some embodiments of the present application;
FIG. 7 is an exemplary flowchart of constructing an average model according to some embodiments of the present application;
FIG. 8 is a schematic diagram of an example training module according to some embodiments of the present application;
FIG. 9 is an exemplary flowchart of training a classifier according to some embodiments of the present application;
FIG. 10 is a schematic structural diagram of an example model matching module according to some embodiments of the present application;
FIG. 11 is an exemplary flowchart of matching an average model with a reconstructed image according to some embodiments of the present application;
FIG. 12 is a schematic structural diagram of an example adjustment module according to some embodiments of the present application;
FIG. 13 is an exemplary flowchart of adjusting a model according to some embodiments of the present application;
FIG. 14 is an exemplary flowchart of determining a target point according to some embodiments of the present application;
FIG. 15 is an exemplary flowchart of determining the normal of an average model edge point according to some embodiments of the present application;
FIG. 16 is an exemplary flowchart of transforming average model edge points according to some embodiments of the present application;
FIG. 17 is a schematic diagram of image sharpness in the present application;
FIG. 18 is an embodiment of image edge classification training in the present application;
FIG. 19 is an embodiment of model mesh classification in the present application;
FIG. 20 is an embodiment of model meshing in the present application;
FIG. 21 is an embodiment of a correlation-factor-based mesh model in the present application;
FIG. 22 is an example diagram of image edges classified based on the degree of sharpness;
FIG. 23 is an example diagram of a model classified based on the degree of sharpness;
FIG. 24 is an embodiment of an image probability map obtained according to the classifier in this embodiment;
FIG. 25 is an example diagram of the matching between the average mesh model and the image after the Hough transform;
FIG. 26 is an example diagram of the precisely matched image chamber segmentation result after adjustment;
FIG. 27A is an image segmentation diagram without partitioning based on correlation factors; and
FIG. 27B is an image segmentation diagram with partitioning based on correlation factors.
Detailed Description
As used in this specification and the claims, unless the context clearly indicates otherwise, the words "a," "an," "one," and/or "the" do not specifically refer to the singular and may also include the plural. The terms "include" and "comprise" only indicate that the clearly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and the method or device may also include other steps or elements. The term "based on" means "based at least in part on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment." Definitions of other terms will be given in the description below.
Flowcharts are used in the present application to illustrate the operations performed by the system according to the embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Instead, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
This disclosure describes an image segmentation process in which image edges are matched based on probability, and model edge points are transformed based on a correlation-factor-based mesh model, achieving a precise matching or segmentation effect. Matching a medical image may include seeking one or a series of spatial transformations for the medical image so that it is spatially consistent with the corresponding points on a model. This consistency may include the same anatomical point of the human body having the same spatial position in the matched image and the model. As a result of the matching, all anatomical points on the image, and/or all anatomical points of diagnostic significance and points of surgical interest, can be matched.
FIG. 1 is a schematic diagram of an application scenario of an example control and processing system according to some embodiments of the present application. As shown in FIG. 1, the control and processing system 100 includes an imaging device 110, a database 120, and a processing device 130.
The imaging device 110 can generate an image by scanning a target object. The image may be any of various medical images, for example, a head image, a chest image, an abdominal image, a pelvic image, a perineal image, a limb image, a spine image, a vertebra image, etc. The head image may include a brain image, a skull image, etc. The chest image may include a whole chest image, a cardiac image, a breast image, etc. The abdominal image may include a whole abdominal image, a kidney image, a liver image, etc. The cardiac image may include, but is not limited to, a full-field digital cardiac image, a digital cardiac tomosynthesis image, a cardiac phase-contrast image, a computed radiography (CR) image, a multi-modality image, etc. The medical image may be a two-dimensional image or a three-dimensional image. The format of the medical image may include the JPEG format, the TIFF format, the GIF format, the FPX format, etc. The medical image may be stored in the database 120, or may be transmitted to the processing device 130 for image processing. The present application will take a cardiac image as an example for description, but those skilled in the art can understand that the method of the present application can also be applied to other images.
The database 120 can store images and/or image-related information. The images and image-related information may be provided by the imaging device 110 and the processing device 130, or may be acquired from outside the system 100, for example, from user input, from a network, etc. The image-related information may include algorithms for processing images, samples, models, parameters, real-time data during processing, etc. The database 120 may be a hierarchical database, a network database, or a relational database. The database 120 may be a local database or a remote database. The database 120 or other storage devices in the system may digitize information and then store it in storage devices that operate electrically, optically, or magnetically. In some embodiments, the database 120 or other storage devices in the system may be devices that store information electrically, such as a random access memory (RAM), a read-only memory (ROM), etc. The random access memory may include, but is not limited to, one or a combination of a decade counter tube, a selectron tube, a delay line memory, a Williams tube, a dynamic random access memory (DRAM), a static random access memory (SRAM), a thyristor random access memory (T-RAM), a zero-capacitor random access memory (Z-RAM), etc. The read-only memory may include, but is not limited to, one or a combination of a magnetic bubble memory, a twistor memory, a thin film memory, a magnetic plated wire memory, a magnetic core memory, a magnetic drum memory, an optical disc drive, a hard disk, a magnetic tape, a non-volatile memory (NVRAM), a phase-change memory, a magnetoresistive random access memory, a ferroelectric random access memory, a non-volatile SRAM, a programmable read-only memory, a shielded heap-read memory, a floating-gate random access memory, a nano random access memory, a racetrack memory, a resistive random access memory, a programmable metallization cell, etc. In some embodiments, the database 120 or other storage devices in the system may be devices that store information magnetically, such as a hard disk, a floppy disk, a magnetic tape, a magnetic core memory, a magnetic bubble memory, a USB flash drive, a flash memory, etc. In some embodiments, the database 120 or other storage devices in the system may be devices that store information optically, such as a CD, a DVD, etc. In some embodiments, the database 120 may be a device that stores information magneto-optically, such as a magneto-optical disc. The access mode of the database 120 or other storage devices in the system may be one or a combination of random access, serial access, read-only access, etc. The database 120 or other storage devices in the system may be non-permanent memory or permanent memory. The storage devices mentioned above are only examples, and the storage devices that the database 120 can use are not limited thereto.
The database 120 may be part of the processing device 130, part of the imaging device 110, or exist independently of the processing device 130 and the imaging device 110. In some embodiments, the database 120 may be connected to other modules or devices in the control and processing system 100 through the network 150. The connection may be wired, wireless, or a combination of the two.
The processing device 130 can acquire image data from the imaging device 110 or from the database 120. The processing device 130 can perform various kinds of processing on the acquired images. The processing may include grayscale histogram processing, normalization, geometric transformation, spatial transformation, image smoothing, image enhancement, image segmentation, image transformation, image restoration, image compression, image feature extraction, etc. The processing device 130 can store the processed image data in the database 120, or transmit it to devices outside the control and processing system 100.
In some embodiments, the processing device 130 may include one or more processors, memories, etc. For example, the processing device 130 may include one or a combination of a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a processor, a microprocessor, an advanced RISC machine processor, etc.
In some embodiments, the control and processing system 100 may further include a terminal device 140. The terminal device can exchange information with the imaging device 110, the database 120, and the processing device 130. For example, the terminal device 140 can acquire processed image data from the processing device 130. In some embodiments, the terminal device 140 can acquire image data from the imaging device 110 and transmit the image data to the processing device 130 for image processing. The terminal device 140 may include one or more input devices, control panels, etc. For example, the input devices may include a keyboard, a touch screen, a mouse, a voice input device, a scanning device, an information recognition device (such as an eye recognition system, a fingerprint recognition system, a brain monitoring system, etc.), a remote controller, etc.
The control and processing system 100 can be connected to a network 150. The network 150 may be a wireless network, a mobile network, a wired network, or another connection. The wireless network may include Bluetooth®, WLAN, Wi-Fi, WiMax, etc. The mobile network may include 2G signals, 3G signals, 4G signals, etc. The wired network may include a local area network (LAN), a wide area network (WAN), a proprietary network, etc.
The database 120 and the processing device 130 in the control and processing system 100 can execute operation instructions through a cloud computing platform. The cloud computing platform may include a storage-oriented cloud platform mainly for data storage, a computing-oriented cloud platform mainly for data processing, and an integrated cloud computing platform for both computing and data storage and processing. For example, some of the image data generated by the control and processing system 100 can be computed or stored by the cloud computing platform.
It should be noted that the above description of the control and processing system 100 is merely for convenience of description and does not limit the present application to the scope of the listed embodiments.
图2是根据本申请的一些实施例所示的处理设备的一种示例系统配置的示意图。如图2所示,处理设备130可以包括一个数据总线210、一个处理器220、一个只读存储器(ROM)230、一个随机存储器(RAM)240、一个通信端口250、一个输入/输出端口260、一个硬盘270和一个与输入/输出端口260相连的显示器280。所述处理设备130内各硬件之间的连接方式可以是有线的、无线的或两者的结合。任何一个硬件都可以是本地的、远程的或两者的结合。
数据总线210可以用于传输数据信息。在一些实施例中,处理设备130内各硬件 之间可以通过所述数据总线210进行数据的传输。例如,处理器220可以通过所述数据总线210将数据发送到存储器或输入/输出端口260等其它硬件中。需要注意的是,所述数据可以是真正的数据,也可以是指令代码、状态信息或控制信息。在一些实施例中,数据总线210可以为工业标准(ISA)总线、扩展工业标准(EISA)总线、视频电子标准(VESA)总线、外部部件互联标准(PCI)总线等。
处理器220可以用于逻辑运算、数据处理和指令生成。在一些实施例中,处理器220可以从内部存储器中获取数据/指令,所述内部存储器可以包括只读存储器(ROM)、随机存储器(RAM)、高速缓冲存储器(Cache)(在图中未示出)等。在一些实施例中,处理器220可以包括多个子处理器,所述子处理器可以用于实现系统的不同功能。
只读存储器230用于处理设备130的加电自检、处理设备130中各功能模块的初始化、处理设备130的基本输入/输出的驱动程序等。在一些实施例中,只读存储器可以包括可编程只读存储器(PROM)、可编程可擦除只读存储器(EPROM)等。随机存储器240用于存放操作系统、各种应用程序、数据等。在一些实施例中,随机存储器240可以包括静态随机存储器(SRAM)、动态随机存储器(DRAM)等。
通信端口250用于连接操作系统与外部网络,实现它们之间的通信交流。在一些实施例中,通信端口250可以包括FTP端口、HTTP端口或DNS端口等。输入/输出端口260用于外部设备或电路与处理器220之间进行数据、信息的交换和控制。在一些实施例中,输入/输出端口260可以包括USB端口、PCI端口、IDE端口等。
硬盘270用于存储处理设备130所产生的或从处理设备130外所接收到的信息及数据。在一些实施例中,硬盘270可以包括机械硬盘(HDD)、固态硬盘(SSD)或混合硬盘(HHD)等。显示器280用于将处理设备130生成的信息、数据呈现给用户。在一些实施例中,显示器280可以包括一个物理显示器,如带扬声器的显示器、LCD显示器、LED显示器、OLED显示器、电子墨水显示器(E-Ink)等。
图3是根据本申请的一些实施例所示的用于实施本申请中一些特定系统的一种示例移动设备示意图。如图3所示,移动设备350可以作为终端设备140的一种实现方式。在一些实施例中,用户可以通过移动设备350接收或发送与控制及处理系统100相关的信息。移动设备350可以包括智能手机、个人数码助理(PDA)、平板电脑、掌上游戏机、智能眼镜、智能手表、可穿戴设备、虚拟现实设备或显示增强设备等中的一种或多种。在一些实施例中,移动设备350可以包括一个或多个中央处理器(CPUs)358、一个或多个图像处理器(GPUs)356、一个显示器354、一个内存362、一个通讯平台352、一个存储器368和一个或多个输入/输出设备360。进一步地,移动设备350还可以包括一个系统总线、一个控制器等。如图3所示,CPU可以从存储器368将移动设备操作系统(例如,iOS、Android、Windows Phone等)364和一个或多个应用366下载到内存362中。所述一个或多个应用366可以包括一个网页或其它用于接收和传递与控制及处理系统100相关的信息的移动应用软件(App)。用户可以通过输入/输出设备360获取或提供信息,所述信息可以进一步传输给控制及处理系统100和/或系统内的设备单元。
在本申请的实施例中,计算机硬件平台可以用作一个或多个元件(例如,控制及处理系统100及其内部的其它部分)的硬件平台,实施各种模块、单元以及它们的功能。所述硬件元件、操作系统和编程语言本质上都是传统的,本领域技术人员有可能将这些技术改编并应用于心脏图像模型建立和边缘分割。具有用户界面的计算机可以作为个人电脑(PC)、其它工作站或终端设备,适当编程的计算机也可以作为服务器。由于本领域技术人员对本申请中所使用的计算机设备的结构、编程和一般操作应该都很熟悉,因此,不再针对其它附图作相关具体解释。
图4是根据本申请的一些实施例所示的示例处理设备的示意图。处理设备130可以包括一个获取模块410、一个图像重建模块420、一个存储模块430、一个模型构建模块440、一个训练模块450、一个匹配模块460和一个调整模块470。所述处理设备130内各模块之间的连接方式可以是有线的、无线的或两者的结合。任何一个模块都可以是本地的、远程的或两者的结合。
存储模块430可以用于存储图像数据或信息,其功能可以通过图2中硬盘270、只读存储器230、随机存储器240等中的一种或多种的组合来实现。存储模块430可以存储处理设备130中其他模块或处理设备130之外的模块或设备的信息。存储模块430存储的信息可以包括成像设备110的扫描数据、用户输入的控制命令或参数信息、处理设备130中处理部分生成的中间数据或完整数据信息等。在一些实施例中,存储模块430可以将存储的信息发送给处理部分进行图像处理。在一些实施例中,存储模块430可以存储处理部分生成的信息,例如实时计算数据。存储模块430可以包括但不限于常见的各类存储设备如固态硬盘、机械硬盘、USB闪存、SD存储卡、光盘、随机存储器(RAM)或只读存储器(ROM)等。存储模块430可以是系统内部的存储设备,也可以是系统外 部或外接的存储设备,如云存储服务器上的存储器。
获取模块410可以用于获取成像设备110采集的图像数据、数据库120存储的图像数据或控制及处理系统100外部的数据,其功能可以通过图2中的处理器220来实现。所述图像数据可以包括成像设备110采集的图像数据、处理图像的算法、样本、模型、参数、处理过程中的实时数据等。在一些实施例中,获取模块410可以将获取到的图像数据或信息发送给图像重建模块420进行处理。在一些实施例中,获取模块410可以将获取到的处理图像的算法、参数等信息发送给模型构建模块440。在一些实施例中,获取模块410可以将获取到的图像数据或信息发送给存储模块430进行存储。在一些实施例中,获取模块410可以将获取到的样本、参数、模型、实时数据等信息发送给训练模块450、匹配模块460或调整模块470。在一些实施例中,获取模块410可以接收来自处理器220的一个数据获取指令,并完成相应的数据获取操作。在一些实施例中,获取模块410可以在获取图像数据或信息后对其进行预处理。
图像重建模块420可以用于构建一个医学影像,其功能可以由图2中的处理器220来实现。在一些实施例中,图像重建模块420可以从获取模块410或存储模块430中获取图像数据或信息,并根据所述图像数据或信息构建所述医学影像。所述医学影像可以是一个人体三维医学影像。所述图像数据可以包括不同时间、不同位置、不同角度的扫描数据。根据所述扫描数据,图像重建模块420可以计算出人体对应部位的特征或状态,如人体对应部位对射线的吸收能力、人体对应部位组织的密度等,从而构建出所述人体三维医学影像。进一步地,所述人体三维医学影像可以通过显示器280进行显示,或者通过存储模块430进行存储。在一些实施例中,所述人体三维医学影像也可以作为重建后的待处理图像发送给模型构建模块440进一步处理。
模型构建模块440可以用于建立目标物体的三维平均模型。在一些实施例中,所述目标物体可以是心脏,所述三维平均模型可以是基于多套参考模型构建的心脏腔室三维平均网格模型。在一些实施例中,模型构建模块440可以通过获取模块410、存储模块430、或用户输入的方式获取至少一个心脏腔室的参考模型及与参考模型相关的信息。所述与参考模型相关的信息可以包括图像的尺寸、像素、像素的空间位置等。在一些实施例中,模型构建模块440可以根据获取的至少一个心脏腔室的参考模型及与参考模型相关的信息对参考模型进行配准等预处理,使得所有参考模型的方向、比例等一致。所述预处理后的图像可以进一步通过手动或处理器自动标注的方式来标注腔室边缘,将心脏参考模型划分 成数个子心脏腔室,并根据各个腔室的边缘点数据构建心脏腔室平均网格模型。模型构建模块440可以将构建的心脏腔室平均网格模型发送给存储模块430进行存储,也可以发送给训练模块450或匹配模块460进行进一步操作。在一些实施例中,模型构建模块440还可以根据多套参考模型数据确定平均模型上各个腔室之间的关系。例如,模型构建模块440可以构建关联因子矩阵,所述关联因子矩阵可以表示各个腔室对某一个或多个边缘数据点的影响。通过构建关联因子矩阵,可以改善腔室边界分离情形。模型构建模块440可以将构建的关联因子矩阵发送给存储模块430进行存储,也可以发送给匹配模块460或调整模块470用于运算处理。
训练模块450可以用于训练分类器。训练模块450可以将可能的边缘点划分到不同腔室类别中。例如,训练模块450可以将参考模型边缘附近一定范围的数据点分别划分到左心室、左心房、右心室、右心房、左心肌或主动脉六个腔室类别中。又例如,训练模块450可以基于腔室边缘的变化程度将参考模型边缘附近一定范围的数据点分别划分到左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘10个腔室类别中。在一些实施例中,训练模块450可以通过存储模块430、模型构建模块440、或用户输入的方式获取至少一个心脏腔室的参考模型及与该参考模型相关的信息。所述与参考模型相关的信息可以包括参考模型中各个腔室的边缘点数据等。在一些实施例中,训练模块450可以根据腔室边缘附近的点与腔室边缘的距离将腔室边缘附近的点划分为正样本和负样本。在一些实施例中,所述正样本可以包括距离腔室边缘一定阈值范围内的数据点,所述负样本可以包括距离边缘较远以及空间中其它随机位置的数据点。在一些实施例中,训练模块450可以根据正负样本点训练参考模型或平均模型上腔室边缘附近的点,并获得一个或多个分类器。例如,训练模块450可以获取点分类器。所述点分类器对所述第一边缘的多个点根据图像特征进行分类。所述图像特征可以与锐利程度和所处位置相关。又例如,训练模块450可以获取第一分类器。所述第一分类器可以与所述点分类器相关。在一些实施例中,所述第一分类器可以根据点分类器分类后的多个点,将第一边缘一定范围内的点分类器分类后的多个点作为正样本,将第一边缘一定范围以外的点分类器分类后的多个点作为负样本。然后,对所述正样本和负样本进行分类,并根据分类后的正样本和负样本获得训练后的第一分类器。在一些实施例中,训练模块450可以利用Probabilistic Boosting-Tree(PBT)训练分类器。所述PBT可以包括两级PBT算法或多级 PBT算法。训练模块450可以将训练好的分类器发送给存储模块430进行存储,也可以发送给调整模块470用于运算处理。
匹配模块460可以用于将待处理图像与模型构建模块440建立的平均模型进行匹配,构建与待处理图像对应的三维网格模型。所述待处理图像可以从图像重建模块420或存储模块430获取。在一些实施例中,匹配模块460可以通过霍夫变换等方法将平均模型匹配到待处理图像上,获得与待处理图像粗略匹配后的心脏腔室三维网格模型。匹配模块460可以通过获取模块410、存储模块430、或用户输入的方式获取所述霍夫变换所需要的参数等信息。匹配模块460可以将匹配后的心脏腔室三维网格模型发送给存储模块430进行存储,也可以发送给调整模块470进一步优化处理。
调整模块470可以用于优化模型,使模型更接近于真实的心脏(待处理心脏图像数据)。调整模块470可以从匹配模块460或存储模块430获取粗略匹配后的心脏腔室网格模型。在一些实施例中,调整模块470可以根据匹配后所得心脏模型上腔室边缘一定范围的数据点属于腔室边缘的概率确定最理想的心脏腔室边缘。调整模块470可以进一步精确调整心脏腔室三维网格模型。所述精确调整可以包括相似性变换、分段仿射变换和/或基于能量函数的微变等。在一些实施例中,调整模块470可以将精确调整所得的心脏腔室三维网格模型进行图像形式转换,得到心脏腔室边缘分割图(如图26所示)。调整模块470可以将精确调整后的心脏腔室模型或心脏腔室分割图发送给存储模块430进行存储,也可以发送给显示器280进行显示。
需要注意的是,上述对于处理设备130的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该设备的工作原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接,对实施上述设备的形式和细节上作各种修正和改变。例如,模型构建模块440和/或训练模块450可以去掉,或者与存储模块430合并。诸如此类的变形,均在本申请的保护范围之内。
图5是根据本申请的一些实施例所示的实施处理设备的示例性流程图。在步骤510中,可以获取图像数据。在一些实施例中,步骤510可以通过获取模块410实现。所述图像数据可以从成像设备110、数据库120或控制及处理系统100外部获得。所述图像数据可以包括CT、正电子放射层析成像技术(PET)、单光子辐射断层摄影(SPECT)、MRI(磁共振成像技术)、超声(Ultrasound)及其它医学影像设备采集的原始图像数据。在一 些实施例中,所述图像数据可以是心脏或心脏的局部原始图像数据。在一些实施例中,步骤510可以包括对获取的心脏原始图像数据进行预处理,并将预处理后的原始图像数据发送给处理设备130中的图像重建模块420或存储模块430。所述预处理可以包括图像的畸变矫正、去噪、平滑、增强等。
在步骤520中,可以根据心脏图像数据重新构建心脏图像。该步骤可以由处理设备130中的图像重建模块420基于图像重建技术完成。所述心脏图像数据可通过获取模块410或存储模块430获取。所述心脏图像可以包括全方位数字化心脏图、数字化心脏层析X射线图、心脏相衬图、X射线心脏影像(CR)图、多模态心脏图像等。所述心脏图像可以是二维图像或三维图像。所述心脏图像的格式可以包括JPEG格式、TIFF格式、GIF格式、FPX格式等。所述图像重建技术可以包括解联立方程组法、傅里叶变换重建法、直接反投影重建法、滤波反投影重建法、傅立叶反投影重建法或卷积逆投影重建法、迭代重建法等。在一些实施例中,步骤520可以对获取的心脏图像数据进行预处理,并获得多个心脏截面图或投影图。在一些实施例中,获取的心脏图像数据或预处理后的心脏图像数据可以包括多个心脏截面图。图像重建模块420可以根据一系列心脏截面图提供的信息重新构建心脏图像或模型。所述心脏截面图提供的信息可以包括心脏不同部位的组织密度、对射线的吸收能力等信息。重新构建的心脏图像可以通过显示器280进行显示,或者通过存储模块430进行存储。重新构建的心脏图像也可以由处理设备130中的模型构建模块440进行进一步图像处理。
在步骤530中,可以构建一个三维心脏平均网格模型。该步骤可以由处理设备130中的模型构建模块440根据多个参考模型完成。步骤530可以通过模块410、存储模块430或用户输入的方式获取多个参考模型。在一些实施例中,步骤530可以包括对多个参考模型进行图像配准。所述图像配准可以包括基于灰度图像配准、基于变换域图像配准、基于特征图像配准等。其中,特征可以包括特征点、特征区域、特征边缘等。在一些实施例中,所述多个参考模型可以是经过用户标注过腔室边缘的心脏腔室分割数据或参考模型。所述平均模型可以包括Point Distribution Model(PDM)、Active Shape Model(ASM)、Active Contour Model(也称为Snakes)、Active Appearance Model(AAM)等计算得到的模型。在一些实施例中,步骤530可以包括根据多个参考模型上的腔室边缘数据确定所构建的平均模型上各个腔室之间的关系,并建立关联因子二维矩阵。在一些实施例中,三维心脏平均网格模型或含有关联因子信息的平均模型可以由处理器220直接发送给处理设备130中 的存储模块430进行存储,或发送给匹配模块460进一步处理。
在步骤540中可以将心脏图像数据与三维心脏平均网格模型进行匹配。进一步地,所述匹配可以包括将心脏图像数据中第一边缘与三维心脏平均网格模型的第二边缘匹配。在一些实施例中,所述第一边缘可以包括一个外边缘和一个内边缘。所述外边缘可以是心脏外部轮廓,所述内边缘可以是心脏内部腔室轮廓,外部轮廓和内部腔室轮廓间可以由心脏组织填充。在一些实施例中,对应于第一边缘,所述三维心脏平均网格模型的第二边缘也可以包括一个外边缘和一个内边缘。所述第二边缘外边缘对应心脏外轮廓的边缘,所述第二边缘内边缘对应心脏内部腔室轮廓的边缘。所述外边缘和内边缘可以分别指用于粗略匹配和精确匹配的边缘,当本申请披露的方法应用于其他物体、器官或组织时,所述外边缘和内边缘不一定有几何上的内外关系。例如,对于某些物体、器官或组织,用于粗略匹配的边缘可能在用于精确匹配的边缘的外侧、内侧或同侧。又例如,所述用于粗略匹配的边缘与所述用于精确匹配的边缘可以有重叠或者交叉。在一些实施例中,所述心脏图像数据与三维心脏平均模型的匹配可以为心脏图像数据第一边缘的外边缘与三维心脏平均网格模型第二边缘的外边缘的匹配。在一些实施例中,步骤540可以由匹配模块460通过图像匹配方法完成。所述图像匹配方法可以包括基于NNDR的匹配方法、邻近特征点的搜索算法、基于霍夫变换的目标检测等。在一些实施例中,可以通过广义霍夫变换将模型构建模块440建立的心脏平均模型匹配到图像重建模块420处理得到的心脏图像数据上的第一边缘上,并得到匹配后的心脏模型。在一些实施例中,所述第一边缘上具有多个点,根据图像特征对第一边缘的多个点进行分类,获取点分类器。所述图像特征可以与锐利程度和所处位置相关。在一些实施例中,可以基于待匹配心脏图像数据上各点属于边缘的概率实施加权广义霍夫变换。所述概率可以根据训练模块450训练好的第一分类器,将待匹配的心脏图像数据上的每个点输入分类器计算得到。所述第一分类器可以基于点分类器获取。所述点分类器可以根据图像特征通过对第一边缘的多个点进行分类获取。在一些实施例中,可以根据所得待匹配心脏上各点的概率,构建一个待匹配心脏的边缘概率图。所述边缘概率图可以包括灰度梯度图、彩色梯度图(如图24所示)等。在一些实施例中,在计算待匹配心脏图像数据上的每个点作为边缘的概率前,可以对该心脏图像进行预处理。例如,可以将完全不可能是心脏边缘的部位排除,从而减小分类器的计算量。例如,对于CT图像而言,肌肉组织的CT值一般大于-50,那么可以将CT值小于-50的部位通过掩膜标记,使分类器不需要计算该部位的点。在一些实施例中,处理设备130中的匹配模块460可以将匹配后的心脏模型或三维心脏网格模型发送给存储模块430进行存储,也可以发送给调整模块470进一步优化处理。
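作为上述掩膜预处理的一个极简示意,下面给出一段基于numpy的Python代码草图。其中的数据为随机生成,阈值-50沿用上文对CT图像的举例,数组尺寸、变量名等均为演示假设,并非本申请限定的实现:

```python
import numpy as np

# 示意:将完全不可能是心脏边缘的部位用掩膜排除,
# 以减少分类器需要计算的点数(数据为随机生成的演示数据)
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 400, size=(64, 64, 64))  # 假设的CT体数据(HU值)

mask = ct >= -50            # 肌肉组织的CT值一般大于-50
coords = np.argwhere(mask)  # 仅这些体素才送入分类器计算边缘概率
print(len(coords), mask.mean())  # 保留的体素数及其比例
```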
在步骤550中,可以获得精确调整后的心脏腔室分割图。该步骤可以由处理设备130中的调整模块470完成。在一些实施例中,调整模块470可以调整模型上的腔室边缘点(第二边缘的内边缘),以达到与心脏图像数据中第一边缘的内边缘匹配。在一些实施例中,步骤550可以根据匹配后三维心脏网格模型上的腔室边缘确定边缘目标点。在一些实施例中,可以根据匹配后三维心脏网格模型上的腔室边缘一定范围内的第二边缘点的概率确定边缘目标点。在一些实施例中,所述概率可以使用基于第二边缘点训练的第二分类器计算。在一些实施例中,所述概率可以调用基于多个参考模型或平均模型训练的第一分类器计算。在一些实施例中,步骤550可以基于确定的边缘目标点对三维心脏网格模型进行变形,从而得到腔室边缘进一步调整后的三维心脏网格模型。所述变形可以包括相似性变换、仿射变换及其它图像微变形方法等。例如,在一些实施例中,可以基于确定的边缘目标点依次进行相似性变换、分段仿射变换和/或基于能量函数的微变。在一些实施例中,处理设备130中的调整模块470可以将调整后的三维心脏网格模型通过掩膜(mask)转化为心脏腔室分割图像(如图26所示)。所述腔室分割图像的不同腔室可以用不同的颜色标注。在一些实施例中,处理设备130中的调整模块470可以将精确调整后的心脏腔室模型或心脏腔室分割图发送给存储模块430进行存储,也可以发送给显示器280进行显示。
需要注意的是,上述对于处理设备130进行腔室分割过程的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该设备的工作原理后,可能在不背离这一原理的情况下,对各个步骤的顺序进行任意调整,或者添加删除某些步骤。例如,构建平均模型530的步骤可以去掉。又例如,调整模块470可以只对网格模型进行上述几种变形中的一种或两种,或者采用其他形式的微变。诸如此类的变形,均在本申请的保护范围之内。
图6是根据本申请的一些实施例所示的示例模型构建模块的示意图。模型构建模块440可以包括一个获取单元610、一个配准单元620、一个标注单元630、一个平均模型生成单元640和一个关联因子生成单元650。所述模型构建模块440内各单元之间的连接方式可以是有线的、无线的或两者的结合。任何一个单元都可以是本地的、远程的或两者的结合。
获取单元610可以用于获取多个参考模型。获取单元610可以通过数据库120、控制及处理系统100外的存储设备或用户输入的方式获取上述信息。其功能可以通过图2中的处理器220来实现。在一些实施例中,所述多个参考模型可以包括一个病人在不同时间、不同位置、不同角度扫描的心脏图像数据。在一些实施例中,所述多个参考模型也可以包括不同病人在不同位置、不同角度扫描的心脏图像数据。在一些实施例中,获取单元610也可以用于获取建模算法、参数等信息。获取单元610可以将获取的多个参考模型和/或其它信息发送给配准单元620、标注单元630、平均模型生成单元640或关联因子生成单元650。
配准单元620可以用于通过图像配准方法调整获取单元610所获取的多个参考模型,并使多个参考模型的位置、比例等一致。所述图像配准可以包括基于空间维数配准、基于特征配准、基于变换性质配准、基于优化算法配准、基于图像模态配准、基于主体配准等。在一些实施例中,配准单元620可以将多个参考模型配准到一个相同的坐标系中。配准单元620可以将配准后的多个参考模型发送给存储模块430进行存储,也可以发送给标注单元630和/或平均模型生成单元640进一步处理。
标注单元630可以用于标注多个参考模型的腔室边缘多个数据点(亦可称为点集)。所述心脏图像或模型可以是配准单元620进行图像配准后的多个参考模型,也可以是平均模型生成单元640构建的平均模型。例如,腔室边缘可以由用户在配准单元620进行图像配准后的多个参考模型上手动标注。又例如,腔室边缘可以由标注单元630根据明显不同的腔室边缘特征自动标注。在一些实施例中,标注单元630可以将多个参考模型中的整个心脏图像或模型按照腔室划分成六个部分,分别为左心室、左心房、右心室、右心房、心肌以及主动脉。在一些实施例中,标注单元630可以根据参考模型上腔室边缘的变化程度(亦被称为梯度),将多个参考模型上的整个心脏图像或模型划分为锐利类和非锐利类。具体地,标注单元630可以把几个腔室的边缘点与外部相连或与外部变化程度较小的标记为锐利类,将与内部其它腔室相连或与外部变化程度较大的标记为非锐利类,如图17中的两个箭头所示。例如,标注单元630可以将多个参考模型上的整个心脏图像或模型划分为10个类别:左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘(如图18所示)。在一些实施例中,标注单元630可以将多个参考模型配准至一个相同坐标系中,通过比较多个参考模型与平均模型生成单元640所得的平均模型上各点的位置,标注多个参考模型上的腔室边缘。例如,标注单元630可以将平均模 型上与参考模型上对应点距离最近的点所属的类别作为参考模型上该点的类别。标注单元630可以将标注有腔室边缘点集的多个参考模型发送给存储模块430进行存储,也可以发送给训练模块450、平均模型生成单元640和/或关联因子生成单元650进一步处理或用于计算。
平均模型生成单元640可以用于构建三维心脏平均网格模型。在一些实施例中,平均模型生成单元640可以提取标注后的多个参考模型或平均模型中的腔室边缘,通过对每个参考模型或平均模型中的腔室边缘模型进行处理获得多个参考网格模型,并通过图像模型构建方法计算得到平均网格模型。所述图像模型构建方法可以包括Point Distribution Model(PDM)、Active Shape Model(ASM)、Active Contour Model(也称为Snakes)、Active Appearance Model(AAM)等。在一些实施例中,平均模型生成单元640可以将腔室标注后的整个心脏平均模型划分成六个独立的或者相互结合的子模型。例如,左心室模型、左心房模型、右心室模型、右心房模型、左心肌模型和主动脉模型等(如图20所示)。在一些实施例中,平均模型生成单元640可以提取多个腔室边缘,并确定多个腔室边缘上的控制点分布,通过连接控制点形成网络。在一些实施例中,平均模型生成单元640可以基于网格模型通过ASM建模方法得到心脏腔室的平均网格模型,以及相应的特征值、特征向量等模型参数。在一些实施例中,平均模型生成单元640可以在平均模型计算中加入关联因子对控制点的影响。例如,在ASM计算中,平均模型生成单元640可以利用加权平均(即Σ(Fi*Wi))来计算控制点的调整结果,其中,Fi为某个腔室的变形参数,Wi为该腔室对控制点的影响系数或权重值。通过所述基于关联因子的加权平均计算可以使得模型上控制点的调整受到多个腔室结果的影响,从而达到关联多个腔室的目的。平均模型生成单元640可以将得到的三维心脏平均网格模型发送给存储模块430进行存储或关联因子生成单元650用于计算。平均模型生成单元640也可以将得到的三维心脏平均网格模型发送给训练模块450和/或匹配模块460进一步处理。
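下面用一段示意性的Python代码说明上述基于关联因子的加权平均(即Σ(Fi*Wi))如何作用于单个控制点。其中的位移数值与权重均为演示假设:

```python
import numpy as np

def weighted_adjustment(deform, weights):
    """按加权平均 Σ(F_i * W_i) 计算某控制点的调整结果。
    deform:  形状为 (K, 3) 的数组,第 k 行为腔室 k 给出的位移 F_k
    weights: 长度为 K 的数组,为各腔室对该控制点的关联因子 W_k
    """
    deform = np.asarray(deform, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * deform).sum(axis=0)

# 示例:某控制点受左心室(0.8)和左心房(0.2)共同影响
F = np.array([[1.0, 0.5, 0.0],   # 左心室建议的位移
              [0.4, 0.2, 0.1]])  # 左心房建议的位移
W = np.array([0.8, 0.2])
print(weighted_adjustment(F, W))  # -> [0.88 0.44 0.02]
```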
关联因子生成单元650可以用于建立各腔室和平均网格模型上控制点的关系。在一些实施例中,所述关系可以是腔室和控制点作为行列的二维关联因子矩阵,矩阵的值可以表示各腔室对各控制点的影响系数或权重。在一些实施例中,所述矩阵的值可以是0-1之间的任意实数。例如,示例的二维关联因子矩阵可以如下所示:
  左心室 左心房 右心室 右心房 心肌 主动脉
控制点1 1 0 0 0 0 0
控制点2 0.8 0.2 0 0 0 0
其中,控制点1属于左心室,但不处于左心室和心房的连接部分,所以只有左心室的影响系数为1,其它腔室的影响系数均为0。控制点2属于左心室,但它同时位于左心室和左心房的连接部分,所以左心室对它的影响系数为0.8,左心房的影响系数为0.2。
在一些实施例中,关联因子生成单元650可以根据网格模型上控制点的腔室归属,以及控制点与其它腔室的位置关系,建立关联因子矩阵。在一些实施例中,关联因子生成单元650可以根据控制点与其它腔室的距离计算关联因子的影响范围或影响系数。例如,关联因子生成单元650可以通过控制点距离其它腔室的最大距离控制关联因子影响系数的计算。在一些实施例中,关联因子生成单元650可以根据各腔室间的紧密程度来调整不同腔室间的影响范围和影响系数。如图21所示,网格控制点模型中,浅色的控制点表示只受到所在腔室的影响,而深色的腔室连接处则表示控制点受到多个连接的腔室影响,其中颜色越深代表受其他腔室影响越大。关联因子生成单元650可以将得到的二维关联因子矩阵发送给存储模块430进行存储,也可以发送给平均模型生成单元640和/或调整模块470用于加权计算。
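作为参考,下面给出由距离生成单个控制点关联因子的一个Python草图。本申请原文并未限定具体的衰减形式,此处采用线性衰减并归一化,仅为演示假设:

```python
import numpy as np

def association_row(dists, radius):
    """根据控制点到各腔室的距离生成一行关联因子。
    dists:  控制点到 K 个腔室的距离
    radius: 影响范围(最大距离),超出该距离的腔室影响系数为 0
    """
    w = np.clip(1.0 - np.asarray(dists, dtype=float) / radius, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else w  # 归一化,使各腔室影响系数之和为 1

# 示例:距所属腔室 0、距相邻腔室 2,其余腔室较远
print(association_row([0.0, 2.0, 9.0, 12.0], radius=10.0))
```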
需要注意的是,上述对于模型构建模块440的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可能在不背离这一原理的情况下,对该模块中各个单元进行任意组合,或者构成子系统与其他单元连接,对实施上述模块的形式和细节上作各种修正和改变。例如,配准单元620和/或标注单元630可以去掉,或者与获取单元610、存储模块430合并。又例如,所述多个参考模型或平均模型可以包括已经由用户进行边缘标注的心脏数据或模型。又例如,所述多个参考模型或平均模型可以包括已经进行粗略的或精细的腔室分割的心脏数据。诸如此类的变形,均在本申请的保护范围之内。
图7是根据本申请的一些实施例所示的构建平均模型的示例性流程图。在步骤710中,可以获取多个心脏参考模型。所述多个心脏参考模型可以通过数据库120、用户输入或控制及处理系统100外的存储设备获取。在一些实施例中,所述多个心脏参考模型可以包括一个病人在不同时间、不同位置、不同角度扫描的心脏图像数据。在一些实施例中, 所述多个心脏参考模型可以包括不同病人在不同位置、不同角度扫描的心脏图像数据。在一些实施例中,所述多个心脏参考模型可以包括已经由专家进行边缘标注的心脏数据或模型。在一些实施例中,所述多个心脏参考模型可以包括已经进行粗略的或精细的腔室分割的心脏数据。
在步骤720中,可以对获取的多个参考模型进行图像配准。该步骤可以由模型构建模块440中的配准单元620完成。在一些实施例中,可以通过平移、旋转、缩放等方式将任意两个参考模型变换到同一坐标系中,并使得上述两个参考模型中对应于空间同一位置的点一一对应,从而实现信息融合。在一些实施例中,所述图像配准可以包括基于空间维数配准、基于特征配准、基于变换性质配准、基于优化算法配准、基于图像模态配准、基于主体配准等。其中,所述基于空间维数配准可以包括2D/2D配准、2D/3D配准或3D/3D配准。所述基于特征配准可以包括基于特征点(例如不连续点、图形的转折点、线交叉点等)配准、基于面区域(例如曲线、曲面等)配准、基于像素值配准、基于外部特征配准等。所述基于变换性质配准可以包括基于刚性变换配准、基于仿射变换配准、基于投影变换配准和/或基于曲线变换配准等。所述基于优化算法配准可以包括基于梯度下降法配准、基于牛顿法配准、基于Powell法配准、基于遗传算法配准等。所述基于图像模态配准可以包括基于单模态配准和/或基于多模态配准。所述基于主体配准可以包括基于来自同一病人的图像配准、基于来自不同病人图像配准和/或基于病人数据和图谱的配准。
在步骤730中,可以在配准后的多个参考模型上标注腔室边缘。该步骤730可以由模型构建模块440中的标注单元630完成。在一些实施例中,可以通过由用户在多个心脏参考模型上手动标注腔室边缘点,每个参考模型上形成的边缘点集可以将心脏划分成六个部分,分别为左心室、左心房、右心室、右心房、心肌和主动脉。在一些实施例中,可以按照腔室边缘相对于外部和内部的变化程度,将心脏划分为10个类别:左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘(如图18所示)。所述锐利边缘可以指腔室的边缘与外部相连或变化不明显。所述非锐利的可以指腔室的边缘与内部或其它腔室相连或变化明显。
在步骤740中,可以确定多个参考模型上的控制点。该步骤可以由模型构建模块440中的平均模型生成单元640根据经过图像配准和腔室边缘标注的多个参考模型完成。在一些实施例中,可以根据多个参考模型的图像配准结果和腔室边缘标注信息确定每个腔 室的轴。所述轴可以是腔室上任意指定的两点的连线方向。例如,所确定的轴可以是腔室上距离最远的两点的连线构成的长轴。在一些实施例中,可以分别提取多个参考模型标注后的腔室边缘,沿各个腔室上所确定的轴线的横截面方向对各个腔室进行切片,并根据横截面和曲面特征在切片边缘形成密集的点集,构成平均模型的点模型(如图19所示)。在一些实施例中,可以根据点模型确定各个腔室上的控制点。所述控制点可以是点模型上点集的子集。例如,所述子集越大,网格模型越大,心脏分割过程中的计算量越大,分割效果越好;所选用的子集越小,网格模型越小,心脏分割过程中的计算量越小,分割速度较快。在一些实施例中,腔室上控制点的数目可以变化。例如,在粗略分割阶段,控制点数目可以较少,从而快速定位到腔室边缘;在精细分割阶段,控制点数目可以较多,从而实现腔室边缘的精细分割。
在步骤750中,可以根据控制点构建心脏平均网格模型。在一些实施例中,步骤750可以根据控制点之间的关系将不同点连接成多边形网络。例如,在一些实施例中,可以通过连接相邻切片上的邻近控制点形成三角形网络。在一些实施例中,可以通过图像变形方法获得平均网格模型。所述图像变形方法可以包括Point Distribution Model(PDM)、Active Shape Model(ASM)、Active Contour Model(也称为Snakes)、Active Appearance Model(AAM)等。例如,在一些实施例中,可以基于控制点构建的三角形网络通过ASM计算方法得到多个心脏参考模型的平均网格模型(如图20所示)。在一些实施例中,步骤750可以基于二维关联因子矩阵对控制点网格模型进行加权平均模型计算。例如,在ASM计算中,平均模型生成单元640可以利用加权平均(即Σ(Fi*Wi))来计算控制点的调整结果,其中,Fi为某个腔室的变形参数,Wi为该腔室对控制点的影响系数或权重值。
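作为上述平均网格模型计算的一个极简示意,下面的Python草图用PCA(PDM/ASM的常见实现方式)从配准后的多套控制点集中求出平均形状与形状子空间。其中的数据为随机生成,仅用于演示流程,并非真实参考模型:

```python
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.normal(size=(8, 50, 3))  # 假设:8套参考模型,各50个控制点

X = shapes.reshape(len(shapes), -1)   # 每套模型展平成一个形状向量
mean_shape = X.mean(axis=0)           # 平均网格模型(PDM 的均值形状)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigvecs = Vt                          # 形状特征向量
eigvals = s ** 2 / (len(X) - 1)       # 对应特征值

# 新形状可表示为 均值 + Σ b_k·φ_k(ASM 的形状子空间)
b = np.zeros(len(eigvals))
b[0] = 0.5 * np.sqrt(eigvals[0])
new_shape = (mean_shape + b @ eigvecs).reshape(50, 3)
print(new_shape.shape)
```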
需要注意的是,上述对于模型构建模块440构建平均模型的过程的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可能在不背离这一原理的情况下,对各个步骤的顺序进行任意调整,或者添加删除某些步骤。例如,步骤710和步骤720可以合并。又例如,步骤730到步骤750可以循环多次。诸如此类的变形,均在本申请的保护范围之内。
图8是根据本申请的一些实施例所示的示例训练模块的示意图。训练模块450可以包括一个分类单元810和一个分类器生成单元820。所述训练模块450内各单元之间的连接方式可以是有线的、无线的或两者的结合。任何一个单元都可以是本地的、远程的或两者的结合。
分类单元810可以用于将多个参考模型或平均模型上的可能腔室边缘点划分到不同的腔室类别中。该功能可以通过处理器220实现。在一些实施例中,分类单元810可以根据标注单元630划分的腔室类别对参考模型或平均模型上可能的边缘点进行分类(如图22所示)。例如,分类单元810可以将参考模型或平均模型上腔室附近可能的边缘点划分到10个腔室类别中,分别是:左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘。所述分类可以通过多种分类方法实现,包括但不限于决策树分类算法、贝叶斯(Bayes)分类算法、人工神经网络(ANN)分类算法、k-邻近(kNN)、支持向量机(SVM)、基于关联规则的分类算法、集成学习分类算法等。在一些实施例中,分类单元810可以根据腔室边缘附近的点与腔室边缘的距离将腔室边缘附近的点划分为正样本和负样本。例如,所述正样本可以是距离腔室边缘一定阈值范围内的数据点,所述负样本可以是距离边缘较远以及空间中其它随机位置的数据点。在一些实施例中,分类单元810可以将多个参考模型或平均模型上可能边缘点的分类结果或数据发送给存储模块430进行存储,也可以发送给分类器生成单元820进一步处理。
分类器生成单元820可以用于获取训练好的分类器。在一些实施例中,分类器生成单元820可以根据分类单元810划分的边缘点类别对多个参考模型或平均模型上的边缘点进行分类器训练,并得到训练好的分类器(如图23所示)。在一些实施例中,分类器生成单元820可以利用PBT训练分类器。在一些实施例中,训练好的分类器可以在接收到任意一个坐标点后,输出该坐标点对应的概率。所述概率是指某一点作为腔室边缘的概率。在一些实施例中,分类器生成单元820可以将训练好的分类器发送给存储模块430进行存储,也可以发送给匹配模块460和/或调整模块470用于计算。
需要注意的是,上述对于训练模块450的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可能在不背离这一原理的情况下,对该模块中各个单元进行任意组合,或者构成子系统与其他单元连接,对实施上述模块的形式和细节上作各种修正和改变。例如,分类单元810可以对多个参考模型或平均模型进行腔室划分,使划分后的腔室类别相对于标注划分的腔室类别更精细。诸如此类的变形,均在本申请的保护范围之内。
图9是根据本申请的一些实施例所示的训练分类器的示例性流程图。在步骤910,训练模块450中的分类单元810可以获取多个参考模型或平均模型中的样本点。在一些实 施例中,训练模块450可以基于标注后的多个参考模型或平均模型上的腔室分割结果提取腔室边缘(如图22所示),并将每个腔室边缘附近一定范围内的点作为正样本,距离腔室边缘较远以及空间中其它随机位置的点作为负样本。例如,所述腔室边缘一定范围可以是0.1cm、0.5cm、1cm、2cm等。
在步骤920,训练模块450中的分类单元810可以对获取的正负样本点进行分类。在一些实施例中,训练模块450可以按照分类方法将正负样本点添加到不同的腔室类别中。在一些实施例中,正样本可以是平均模型边缘一定范围内的点,负样本可以是一些平均模型边缘一定范围外的点。在一些实施例中,平均模型边缘的一定范围可以设置为零,此时正样本即为平均模型边缘点。在一些实施例中,正负样本可以基于锐利程度及样本点所处位置进行分类。在一些实施例中,样本点所处位置为正负样本所属腔室。例如,训练模块450可以根据标注的腔室类别,将正负样本点划分到10个腔室类别中:左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘。所述分类方法可以包括决策树分类算法、贝叶斯(Bayes)分类算法、人工神经网络(ANN)分类算法、k-邻近(kNN)、支持向量机(SVM)、基于关联规则的分类算法、集成学习分类算法等。其中,决策树分类算法可以包括ID3、C4.5、C5.0、CART、PUBLIC、SLIQ、SPRINT算法等。贝叶斯分类算法可以包括朴素贝叶斯算法、TAN算法(tree augmented Bayes network)等。人工神经网络分类算法可以包括BP网络、径向基RBF网络、Hopfield网络、随机神经网络(例如Boltzmann机)、竞争神经网络(例如Hamming网络、自组织映射网络等)等。基于关联规则的分类算法可以包括CBA、ADT、CMAR等。集成学习分类算法可以包括Bagging、Boosting、AdaBoost、PBT等。
在步骤930,训练模块450可以获取经过分类训练的分类器。在一些实施例中,训练模块450中的分类器生成单元820可以通过PBT算法训练上述样本点类别,并获得一个或多个训练好的分类器(如图23所示)。所述PBT可以包括两级PBT算法或多级PBT算法。在一些实施例中,所述分类器可以包括一个或多个以多个参考模型或平均模型边缘一定范围内的点为正样本训练得到的分类器(也称作“第一分类器”)。在一些实施例中,所述分类器可以包括一个或多个以待处理图像边缘一定范围内的点为正样本训练得到的分类器(也称作“第二分类器”)。
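需要说明的是,PBT在常用开源库中没有现成实现。下面的Python草图以scikit-learn的梯度提升分类器作为替代,示意"以边缘附近的点为正样本、远处随机点为负样本"的训练及概率输出流程;实际实现中分类器的输入应为样本点邻域的图像特征,此处简化为点坐标,各数值均为演示假设:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
edge_pts = rng.normal(0.0, 0.1, size=(200, 3))    # 边缘一定范围内的点(正样本)
rand_pts = rng.uniform(-5.0, 5.0, size=(200, 3))  # 空间其它随机位置的点(负样本)

X = np.vstack([edge_pts, rand_pts])
y = np.r_[np.ones(len(edge_pts)), np.zeros(len(rand_pts))]

clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
# 训练好的分类器接收任一坐标点,输出其作为腔室边缘的概率
print(clf.predict_proba(np.zeros((1, 3)))[:, 1])
```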
需要注意的是,上述对于训练模块450训练分类器的过程的描述,仅为描述方便, 并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可能在不背离这一原理的情况下,对各个步骤的顺序进行任意调整,或者添加删除某些步骤。例如,步骤910和步骤920中可以不区分正样本和负样本,并直接对腔室边缘附近的所有点进行分类。又例如,正负样本点距离腔室边缘的最大距离可以是2cm。诸如此类的变形,均在本申请的保护范围之内。
图10是根据本申请的一些实施例所示的示例模型匹配模块的结构示意图。如图10所示,匹配模块460可以包括一个获取单元1010、一个图像点提取单元1020和一个模型匹配单元1030。所述匹配模块460内各单元之间的连接方式可以是有线的、无线的或两者的结合。任何一个单元都可以是本地的、远程的或两者的结合。
获取单元1010可以获取图像。所述获取的图像为待处理图像。在一些实施例中,所述图像可以是基于图像数据重建的图像。所述重建的图像可以从处理设备130的其它模块中获取。例如,所述重建的图像可以是获取单元1010从图像重建模块420中获取。再例如,所述重建的图像可以是图像重建模块420重建图像后存储在存储模块430中的图像。在一些实施例中,所述图像可以是经由外部设备输入到系统中的图像。例如,外部设备通过通信端口250将图像输入到系统中。在一些实施例中,获取单元1010可以获取平均模型。所述平均模型可以是平均模型生成单元640生成的三维心脏平均网格模型。在一些实施例中,获取单元1010可以获取训练模块450训练好的第一分类器。所述第一分类器可以基于点分类器获取。所述图像特征可以与锐利程度和所处位置相关。
在一些实施例中,获取单元1010可以获取模型匹配模块460进行图像匹配时所需要的参数。例如,获取单元1010可以获取用于广义霍夫变换的参数。在一些实施例中,所述广义霍夫变换的参数可以基于三维平均网格模型及其腔室边缘控制点得到。例如,通过确定平均模型边缘的质心,计算平均模型边缘上所有控制点相对于质心的偏移量以及相对于质心的梯度方向,可以得到对应于各梯度方向的控制点的偏移量向量(下面称为梯度向量)。在一些实施例中,可以将平均模型置于x-y-z坐标系中,并确定各梯度向量在x-y-z坐标系下的坐标。在一些实施例中,各梯度向量的坐标可以转换为极坐标系下的坐标。具体地,可以将梯度向量在x-y平面的投影与x坐标轴的夹角作为第一个角度θ,取值范围为-180度到180度;将梯度向量与x-y平面的夹角作为第二个角度φ,取值范围为-90度到90度。在一些实施例中,可以对上述表示梯度向量的两个角度θ和φ进行离散化处理,获得以离散化的(θ, φ)角度区间为索引、存放对应控制点偏移量的表格(亦称为R-table;原文此处为表格图片)。在一些实施例中,可以将R-table上的偏移量进行缩放或者旋转不同的角度来检测不同大小或不同角度的形状。
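下面给出构建R-table的一个Python草图。其中以"质心减控制点"的偏移方向近似梯度方向、以固定区间数离散化两个角度,这些均为演示假设:

```python
import numpy as np
from collections import defaultdict

def build_rtable(centroid, points, bins=36):
    """以离散化的 (θ, φ) 区间为索引,存放各控制点相对质心的偏移量。"""
    table = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        off = np.asarray(centroid, dtype=float) - p  # 控制点相对质心的偏移
        theta = np.degrees(np.arctan2(off[1], off[0]))             # -180~180度
        phi = np.degrees(np.arcsin(off[2] / np.linalg.norm(off)))  # -90~90度
        key = (int(theta // (360 / bins)), int(phi // (180 / bins)))
        table[key].append(tuple(off))
    return table

tab = build_rtable((0, 0, 0), [(1, 1, 0), (0, 2, 1)])
print(dict(tab))
```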
图像点提取单元1020可以获取待处理图像的边缘概率图。具体地,在一些实施例中,图像点提取单元1020可以通过将待处理图像上点的坐标输入获取单元1010获取的分类器中,计算得到待处理图像上各点作为腔室边缘的概率,并根据各点的概率分布得到待处理图像的边缘概率图。在一些实施例中,所述边缘概率图可以包括灰度梯度图、彩色梯度图(如图24所示)等。在一些实施例中,图像点提取单元1020可以将待处理图像边缘概率图上概率值大于一定阈值的点作为第一边缘点。所述阈值可以是0-1之间的任意实数,例如,0.3、0.5等。
模型匹配单元1030可以将平均模型匹配到待处理图像上。具体地,在一些实施例中,模型匹配单元1030可以通过加权广义霍夫变换将平均模型匹配到待处理图像的边缘概率图上。所述加权广义霍夫变换可以包括根据待处理图像上第一边缘点和R-table获取待处理图像上所有可能的边缘参考点,通过加权累加的方法求出所有边缘参考点的概率累加值,并将概率累加值最大的边缘参考点作为图像的质心。将模型质心到图像质心的变换参数作为模型的变换参数。所述边缘参考点可以通过待处理图像第一边缘点根据R-table中的参数进行坐标变换后得到。所述加权累加可以是将位于相同的边缘参考点(指第一边缘点根据R-table上的参数发生偏移后落到同一边缘参考点的行为)对应的第一边缘点概率累加的过程。根据获得的图像质心,可以依据变换参数将模型的质心变换至与图像质心重合的位置。所述变换参数可以包括旋转角度和缩放 比例等。在一些实施例中,模型匹配单元1030可以根据确定的变换参数对模型上的点进行旋转、缩放处理等,从而得到与待处理图像匹配的模型(如图25所示)。
需要注意的是,上述对于模型匹配模块460的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可以在不背离这一原理的情况下,对该模块中各个单元进行任意组合,或者构成子系统与其他单元连接,对实施上述模块的形式和细节上作各种修正和改变。例如,图像点提取单元1020可以去掉,待处理图像的边缘概率图可以直接由训练模块450得到。诸如此类的变形,均在本申请的保护范围之内。
图11是根据本申请的一些实施例所示的匹配平均模型与重建的图像的示例性流程图。在步骤1110中,可以获取平均模型、待处理图像和训练好的第二分类器。在一些实施例中,所述平均模型可以是平均模型生成单元640基于多个参考模型通过图像模型构建方法得到的三维心脏平均网格模型。所述图像模型构建方法可以包括Point Distribution Model(PDM)、Active Shape Model(ASM)、Active Contour Model(也称为Snakes)、Active Appearance Model(AAM)等。步骤1110可以由获取单元1010实现。在一些实施例中,获取单元1010获取的待处理图像可以是图像重建模块420重建的图像。在一些实施例中,步骤1110可以获取基于平均模型的R-table。
在步骤1120中,可以确定广义霍夫变换的参数。具体地,在一些实施例中,步骤1120可以基于待处理图像的边缘概率图获取待处理图像的第一边缘点。所述第一边缘点可以是待处理图像边缘概率图上概率大于一定阈值的点,例如所述阈值可以是0.3。在一些实施例中,所述边缘概率图可以通过将待处理图像上点的坐标输入获取单元1010获取的分类器中计算待处理图像上各点作为腔室边缘的概率,并根据各点的概率分布得到。在一些实施例中,可以计算待处理图像上第一边缘点梯度方向对应的角度θ和φ,并根据R-table确定第一边缘点的偏移量,以第一边缘点的坐标值和所有对应偏移量的差值作为所有可能边缘参考点的坐标值。进一步地,可以根据边缘参考点投票次数和对应第一边缘点的概率值对所有边缘参考点进行加权累加。所述加权累加可以是将位于同一边缘参考点对应的第一边缘点的概率累加。在一些实施例中,可以将概率累加值最大的边缘参考点所对应的R-table中的参数作为待处理图像的变换参数。所述变换参数可以包括旋转角度和缩放比例等。所述加权累加的方法用公式可以表示为:

A_j = Σ_i p_i·σ_ij      (1)

其中,A_j为第j个可能边缘参考点的概率累加值;i为第一边缘点的索引;j为投票图像上被投票的可能边缘参考点的索引;p_i为第i个第一边缘点的概率值;σ_ij为0-1二值函数,即当第i个第一边缘点在第j个可能边缘参考点有投票贡献时,该值为1,否则为0。
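式(1)的投票过程可以用如下Python草图示意。其中的边缘点、概率与偏移量均为假设的演示数据:

```python
from collections import defaultdict

def weighted_ghough(edge_pts, probs, offsets):
    """加权广义霍夫变换投票:对每个第一边缘点,按其在 R-table 中
    对应的各偏移量投票,并以该点的边缘概率 p 作为票的权重。"""
    acc = defaultdict(float)
    for pt, p, offs in zip(edge_pts, probs, offsets):
        for off in offs:
            ref = tuple(a - b for a, b in zip(pt, off))  # 可能的边缘参考点
            acc[ref] += p                                # 概率加权累加
    # 概率累加值最大的边缘参考点即图像的质心
    return max(acc.items(), key=lambda kv: kv[1])

edge_pts = [(10, 10, 10), (12, 10, 10)]
probs = [0.9, 0.7]
offsets = [[(2, 0, 0)], [(4, 0, 0)]]       # 两点都指向 (8, 10, 10)
print(weighted_ghough(edge_pts, probs, offsets))  # ((8, 10, 10), 1.6)
```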
在步骤1130中,可以得到待处理图像对应的模型。具体地,可以基于所确定的加权广义霍夫变换参数,对待处理图像上的第一边缘点进行变换。例如,可以根据边缘参考点对应的R-table中的角度和缩放比例,变换待处理图像上第一边缘点的坐标,并把平均模型上的相应信息对应到待处理图像上,得到与平均网格模型对应的待处理图像。
图12是根据本申请的一些实施例所示的示例调整模块的结构示意图。如图12所示,所述调整模块470可以包括一个获取单元1210,一个目标点确定单元1220,一个模型变换单元1230。所述调整模块470内各单元之间的连接方式可以是有线的、无线的或两者的结合。任何一个单元都可以是本地的、远程的或两者的结合。
获取单元1210可以获取模型和训练好的第二分类器。具体地,获取单元1210可以获取模型上第二边缘点的坐标数据。在一些实施例中,所述模型的第二边缘点可以是模型上的控制点。在一些实施例中,获取单元1210可以获取训练模块450训练好的第二分类器。所述第二分类器可以是基于腔室及边缘锐利程度划分的10个腔室类别通过PBT分类算法训练得到的10个分类器,对应左心室边缘、左心房锐利边缘、左心房非锐利边缘、右心室锐利边缘、右心室非锐利边缘、右心房锐利边缘、右心房非锐利边缘、主动脉边缘、左心肌锐利边缘和左心肌非锐利边缘。其中,左心室和主动脉等腔室的边缘内外两侧灰度变化不明显、锐利程度较低,因此未对其再依据锐利程度细分。在一些实施例中,获取单元1210可以获取经过模型变换单元1230处理后的模型。
目标点确定单元1220可以确定模型上第二边缘点对应的目标点。以模型上的一个第二边缘点为例,目标点确定单元1220可以确定所述一个模型第二边缘点周围的多个候选点。在一些实施例中,目标点确定单元1220可以将确定的所述一个模型第二边缘点周围的多个候选点输入到获取单元1210获取的分类器中,确定所述一个模型第二边缘点及其周围多个候选点对应于图像边缘的概率,并根据所述概率确定所述一个模型第二边缘点的目标点。在一些实施例中,目标点确定单元1220可以确定模型上所有 第二边缘点的对应目标点。
模型变换单元1230可以对模型进行调整。在一些实施例中,模型变换单元1230可以基于目标点确定单元1220所确定的目标点调整模型边缘点的位置。所述调整可以包括相似性变换、分段仿射变换和/或基于能量函数的微变等。在一些实施例中,模型变换单元1230可以重复多次调整模型,且每次的调整均需要重新确定目标点。具体地,在一些实施例中,模型变换单元1230可以判断模型调整后是否满足预设条件。例如,模型调整次数是否达到一定阈值。若模型调整次数达到一定阈值,则输出精确匹配的模型;若模型调整次数小于所述预设阈值,则发送信号至目标点确定单元1220,再次进行目标点的确定,然后由模型变换单元1230再次进行模型边缘点的变换。在一些实施例中,模型变换单元1230可以获得精确调整后的心脏腔室模型。所述精确调整后的心脏腔室模型可以与真实心脏非常接近。
需要注意的是,上述对于调整模块470的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该模块的工作原理后,可能在不背离这一原理的情况下,对该模块中各个单元进行任意组合,或者构成子系统与其他单元连接,对实施上述模块的形式和细节上作各种修正和改变。例如,模型变换单元1230可以预先设定循环次数,而不需要通过阈值判断来确定精确调整模块470的循环次数。诸如此类的变形,均在本申请的保护范围之内。
图13是根据本申请的一些实施例所示的调整模型的示例性流程图。在步骤1310中,可以获取模型上的第二边缘点和训练好的分类器。在一些实施例中,获取单元1210和获取单元1010获取的分类器不是同一类型。所述获取单元1010获取的分类器可以是训练模块450取平均网格模型边缘一定范围内的点为正样本训练得到。所述获取单元1210获取的分类器可以是取待处理图像边缘一定范围内的点为正样本训练得到。在一些实施例中,获取单元1010获取的分类器可以是第一分类器,获取单元1210获取的分类器可以是第二分类器。
在步骤1320中,可以基于第二分类器确定模型上第二边缘点的目标点。在一些实施例中,步骤1320可以将一个模型第二边缘点一定范围内的候选点输入到第二分类器中,并获得所述模型第二边缘点一定范围内的候选点属于图像边缘的概率。在一些实施例中,可以基于所确定的概率通过目标点确定单元1220确定所述一个模型第二边缘点的目标点。在一些实施例中,模型上的第二边缘点可以为模型上的腔室内边缘点 (第二边缘的内边缘),对应心脏图像数据第一边缘的内边缘。将第二边缘点变换至目标点的过程可以为精确匹配模型上腔室内边缘与心脏图像数据第一边缘的内边缘的过程。所述内边缘可以指用于精确匹配的边缘,当本申请披露的方法使用于其他物体、器官或组织时,所述内边缘不一定是几何上的内部也不一定在所述外边缘的内部。
在步骤1330中,可以基于确定的目标点将模型上的第二边缘点变换至目标点。在一些实施例中,步骤1330可以采用多种变换方式对模型第二边缘点进行变换。例如,可以通过模型变换单元1230采用相似性变化和仿射变换对模型第二边缘点进行修正。
在步骤1340中,可以判断调整结果是否满足预设条件。在一些实施例中,预设条件可以是调整次数是否达到一定阈值。在一些实施例中,所述阈值是可以调整的。当调整次数达到一定阈值时,进入步骤1350,并输出精确匹配后的模型;当调整次数小于一定阈值时,返回步骤1320,可以基于新的模型边缘点通过目标点确定单元1220确定新的模型边缘点对应的目标点。
图14是根据本申请的一些实施例所示的确定目标点的示例性流程图。流程1400可以由目标点确定单元1220实现。图14示出的是确定平均模型边缘上一点的相应目标点的过程,但是本领域技术人员应当理解的是,该方法可以用于获得多个边缘点对应的多个目标点。在一些实施例中,流程1400可以与步骤1320相对应。
在步骤1410中,可以确定一个平均模型边缘点的法线。在一些实施例中,所述法线的方向是由平均模型内部指向外部。具体的法线获取方法可以参见,例如,流程1500及其描述。
在步骤1420中,可以获取沿所述一个平均模型边缘点法线方向的步长及搜索范围。在一些实施例中,所述步长及搜索范围可以是预先设定好的值。在一些实施例中,所述步长及搜索范围可以是用户输入的。例如,用户可以由外部设备通过通信端口250输入到处理设备130中。在一些实施例中,所述搜索范围为以所述一个模型边缘点为起点,沿法线所在直线两个方向(向模型外侧或内侧)中至少一个方向的线段。
在步骤1430中,基于步长及搜索范围,可以确定一个或多个候选点。例如,搜索范围为10厘米,步长设置为1厘米,可以沿法线所在直线两个方向各确定10个点,共21个候选点(包括边缘点本身)。在一些实施例中,也可以确定步长和步数,并根据步长和步数确定候选点。例如,步长设置为0.5厘米,步数设置为3,可以沿法线所在直线两个方向各确定3个点,最远的候选点距离边缘点1.5cm,共7个候选点。
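上述候选点的确定可以用如下Python草图示意,步长与步数沿用上文示例中的0.5厘米和3步:

```python
def candidates_along_normal(pt, normal, step, num):
    """以边缘点为起点,沿法线两个方向按步长各取 num 个候选点(含自身)。"""
    length = sum(v * v for v in normal) ** 0.5
    n = [v / length for v in normal]          # 单位法线方向
    return [tuple(p + k * step * d for p, d in zip(pt, n))
            for k in range(-num, num + 1)]

pts = candidates_along_normal((0, 0, 0), (0, 0, 1), step=0.5, num=3)
print(len(pts), pts[0], pts[-1])   # 共7个候选点,最远距边缘点1.5
```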
在步骤1440中,可以确定所述一个或多个候选点对应于图像边缘一定范围的概率。在一些实施例中,第二分类器是取图像边缘一定范围内的点为正样本训练得到的。所述一定范围可以为一个由机器或用户设置的预设值。例如,所述预设值可以为1厘米。
在步骤1450中,可以基于所述一个或多个候选点对应于图像边缘一定范围的概率,确定所述一个或多个候选点中的一个为目标点。在一些实施例中,目标点可以基于以下函数获得:
F = max_i(P_i − λ·d_i²)      (2)

其中,P_i为候选点对应于图像边缘一定范围的概率;d_i为该候选点与所述一个平均模型边缘点的欧氏距离;λ为权重常量,用以平衡距离与概率值的关系。目标点即为使P_i − λ·d_i²取最大值的候选点。
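据此,目标点的选取可以写成如下Python草图,其中λ的取值仅为演示假设:

```python
def pick_target(candidates, probs, edge_pt, lam=0.5):
    """按式(2)选取目标点:使 P_i − λ·d_i² 最大的候选点。"""
    def score(cp):
        c, p = cp
        d2 = sum((a - b) ** 2 for a, b in zip(c, edge_pt))  # 欧氏距离的平方
        return p - lam * d2
    return max(zip(candidates, probs), key=score)[0]

cands = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(pick_target(cands, [0.2, 0.9, 0.95], edge_pt=(0.0, 0.0, 0.0)))
```

该例中第二个候选点虽然概率略低于第三个,但因距离更近而被选为目标点,体现了距离惩罚项的作用。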
在一些实施例中,可以基于流程1400确定出多个模型边缘点的多个目标点,然后根据所述多个目标点对多个模型边缘点和模型进行变换。具体的变换过程可以参见,例如,图16及其描述。
图15是根据本申请的一些实施例所示的确定边缘点法线的示例性流程图。在一些实施例中,流程1500可以与步骤1410相对应。
在步骤1510中,可以根据平均模型的多个边缘点确定多个多边形。在一些实施例中,所述多个多边形可以通过连接所述多个边缘点形成。所述多个多边形可以为三角形、四边形、多边形等形状。在一些实施例中,根据多个边缘点确定多个多边形的过程也可被称为网格化处理。其中,所述多个多边形可以被称为网格,所述多个边缘点可以被称为节点。在一些实施例中,平均模型表面可能已经形成与所述平均模型边缘点对应的多个多边形,在此情况下,步骤1510可以被省略。
在步骤1520中,可以确定与一个平均模型边缘点相邻的多个多边形。
在步骤1530中,可以确定所述多个多边形的所属平面对应的多个法线。在一些实施例中,所述多个多边形的所属平面对应的多个法线方向位于同侧(平均模型外侧或内侧)。在一些实施例中,所述多个多边形的所属平面对应的多个法线向量为单位向量。
在步骤1540中,可以基于所述多个法线确定所述边缘点的法线。在一些实施例中,可以将所述多个多边形对应的多个法线向量相加或者取平均。
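上述由相邻多边形求边缘点法线的过程可以用如下Python草图示意,假设各三角形按一致的绕向给出,使面法线位于模型同侧:

```python
import numpy as np

def vertex_normal(vertex_id, verts, tris):
    """将与该边缘点相邻的各三角形的单位面法线相加后归一化。"""
    v = np.asarray(verts, dtype=float)
    n = np.zeros(3)
    for tri in tris:
        if vertex_id in tri:
            a, b, c = (v[i] for i in tri)
            fn = np.cross(b - a, c - a)   # 三角形所在平面的法线
            norm = np.linalg.norm(fn)
            if norm > 0:
                n += fn / norm            # 单位面法线累加
    length = np.linalg.norm(n)
    return n / length if length > 0 else n

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (-1, 0, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(vertex_normal(0, verts, tris))   # -> [0. 0. 1.]
```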
图16是根据本申请的一些实施例所示的变换平均模型边缘点的示例性流程图。在一些实施例中,流程1600可以由模型变换单元1230实现。
在步骤1610中,可以对平均模型边缘点执行相似性变换。例如,可以将平均模型边缘点组成的网格作为一个整体,根据腔室边缘点确定的目标点方向,对平均模型整体进行变换,主要包括平移、旋转、缩放等操作。
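相似性变换可以用如下Python草图示意,其中的旋转角、缩放比例与平移量均为演示假设:

```python
import numpy as np

def similarity_transform(pts, scale, R, t):
    """对模型边缘点整体做相似性变换:x' = s·R·x + t。"""
    return scale * np.asarray(pts, dtype=float) @ np.asarray(R).T + np.asarray(t)

theta = np.pi / 6                     # 示例:绕 z 轴旋转 30 度
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.0, 0.0]])
print(similarity_transform(pts, scale=1.2, R=R, t=[0.0, 0.0, 5.0]))
```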
在步骤1620中,可以对平均模型边缘点执行分段仿射变换。在一些实施例中,平均模型边缘点组成的网格可以被按照一定规则进行划分。例如,可以按照心脏腔室对心脏模型进行划分。如图20所示,模型网格可以依据腔室被划分为左心室、左心房、右心室、右心房、主动脉以及左心肌六个部分。在一些实施例中,分段仿射变换指的是将划分的各个部分的网格分别进行仿射变换。所述仿射变换可以指对各个部分的多个节点分别进行移动变换和形状变换。在一些实施例中,平均模型边缘点可能受到多个腔室的影响,这种影响可以以关联因子的形式表示出来。在进行仿射变换时,平均模型边缘点可以朝向目标点转换。在转换的过程中,由于平均模型边缘点受到多个腔室影响,关联因子会成为转换参数(如移动位移、变形比例等)的权重值。根据边缘点对应的目标点和关联因子,模型变换单元1230采用分段仿射变换将平均模型多段网格上的边缘点分别转换至其对应的位置。
在步骤1630中,可以对平均模型边缘点执行基于能量函数的微变。在一些实施例中,能量函数可以表示为:
E = Σ_c(E_ext + α_c·E_int)      (3)

其中,E_ext为外部能量,表示当前点与检测到的目标点的关系;E_int为内部能量,表示当前点与所述平均模型的一个边缘点的关系;α_c为权重,用来平衡内外部能量,不同腔室使用不同的权重;c表示各腔室。当当前点既接近目标点又接近所述平均模型的一个边缘点时,能量函数最小,即求得最优坐标点。总能量E越小,结果越准确。
外部能量函数可以表示为:

E_ext = Σ_i w_i·((∇I(t_i)/‖∇I(t_i)‖)·(v_i − t_i))²      (4)

其中,i为各点的索引;w_i为各点所占的权重(即该点的可靠性);v_i为当前点坐标;t_i为经PBT分类器检测到的目标点;∇I(t_i)为该目标点处的梯度(向量),‖∇I(t_i)‖为梯度值大小。
内部能量函数可以表示为:
E_int = Σ_i Σ_j Σ_k w_i,k·((v_i − v_j) − T_affine,k·(m_i − m_j))²      (5)

其中,i为各点的索引;j为点i的邻域(v_i − v_j对应于当前点位置处各三角形的边);w_i,k为关联因子(各腔室k对当前点i的影响因子);m_i、m_j为平均模型上的点(由PDM/ASM求得),m_i − m_j对应于平均网格模型各三角形的边;T_affine,k为各腔室k经分段仿射变换(PAT)所求得的变换关系。其中,点坐标v_i都是空间三维的。
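式(4)与式(5)的能量计算可以用如下Python草图示意。为简化起见,权重w_i与关联因子均取1,仿射变换取恒等变换,这些均为演示假设:

```python
import numpy as np

def external_energy(v, t, grad):
    """式(4)的简化:E_ext = Σ_i ((∇I/‖∇I‖)·(v_i − t_i))²,w_i 取 1。"""
    e = 0.0
    for vi, ti, gi in zip(v, t, grad):
        g = np.asarray(gi, dtype=float)
        gn = g / np.linalg.norm(g)   # 单位梯度方向
        e += float(np.dot(gn, np.asarray(vi, dtype=float)
                          - np.asarray(ti, dtype=float)) ** 2)
    return e

def internal_energy(v, m, edges, T=lambda x: x):
    """式(5)的单腔室简化:Σ ((v_i−v_j) − T(m_i−m_j))²,关联因子取 1。"""
    e = 0.0
    for i, j in edges:
        d = (np.asarray(v[i], dtype=float) - np.asarray(v[j], dtype=float)
             - T(np.asarray(m[i], dtype=float) - np.asarray(m[j], dtype=float)))
        e += float(np.dot(d, d))
    return e

v = [(0, 0, 0), (1, 0, 0)]        # 当前点
t = [(0, 0, 0.2), (1, 0, 0.1)]    # 检测到的目标点
g = [(0, 0, 1.0), (0, 0, 1.0)]    # 目标点处的图像梯度
m = [(0, 0, 0), (1.1, 0, 0)]      # 平均模型上的对应点
print(external_energy(v, t, g), internal_energy(v, m, [(0, 1)]))
```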
经过加权广义霍夫变换、模型调整和模型变换,可以获得精确匹配的模型和图像。如图26所示,精确匹配后,模型上心脏各腔室被清晰、明确地分割出来。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述发明披露仅仅作为示例,而并不构成对本申请的限定。虽然此处并没有明确说明,本领域技术人员可能会对本申请进行各种修改、改进和修正。该类修改、改进和修正在本申请中被建议,所以该类修改、改进、修正仍属于本申请示范实施例的精神和范围。
同时,本申请使用了特定词语来描述本申请的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本申请至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一替代性实施例”并不一定是指同一实施例。此外,本申请的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本申请的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本申请的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本申请的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机可读信号介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等等、或合适的组合形式。计算机可读信号介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机可读信号介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质、或任何上述介质的组合。
本申请各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写, 包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran 2003、Perl、COBOL 2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或服务器上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本申请所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本申请流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本申请实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本申请披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本申请实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本申请对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”等来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本申请一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
最后,应当理解的是,本申请中所述实施例仅用以说明本申请实施例的原则。其他的变形也可能属于本申请的范围。因此,作为示例而非限制,本申请实施例的替代配置可视为与本申请的教导一致。相应地,本申请的实施例不仅限于本申请明确介绍和描述的 实施例。

Claims (20)

  1. 一种方法,包括:
    获取图像数据;
    基于所述图像数据,重建图像,其中,所述图像包括一个或多个第一边缘;
    获取一个模型,其中,所述模型包括与所述一个或多个第一边缘相对应的一个或多个第二边缘;
    根据所述一个或多个第一边缘和所述一个或多个第二边缘,匹配所述模型与所述重建后的图像;以及
    根据所述一个或多个第一边缘,调整所述模型的一个或多个第二边缘。
  2. 权利要求1所述的方法,所述图像数据包括脑部图像、颅骨图像、胸部图像、心脏图像、乳腺图像、腹部图像、肾脏图像、肝脏图像、骨盆图像、会阴部图像、肢体图像、脊椎图像或椎骨图像。
  3. 权利要求1所述的方法,获取所述模型包括:
    获取多个参考模型;
    对获取的多个参考模型进行配准;
    确定配准后所述多个参考模型上的多个控制点;
    基于所述多个参考模型上的多个控制点,获得所述模型的控制点;以及
    根据所述模型的控制点,生成所述模型。
  4. 权利要求3所述的方法,进一步包括:
    根据所述模型的控制点与所述模型中所述一个或多个第二边缘的关系,生成所述模型的控制点的关联因子。
  5. 权利要求1所述的方法,调整所述模型的一个或多个第二边缘包括:
    确定所述第二边缘上的一个参考点;
    确定与所述参考点相对应的目标点;以及
    根据所述目标点,调整所述模型的第二边缘。
  6. 权利要求5所述的方法,所述确定与所述参考点相对应的目标点包括:
    确定所述参考点的法线;
    获取步长和搜索范围;
    根据所述步长和搜索范围,沿法线确定一个或多个候选点;
    获取一个第一分类器;
    根据所述第一分类器,确定所述一个或多个候选点对应于所述第一边缘的概率;以及
    基于所述一个或多个候选点对应于所述第一边缘的概率,确定所述目标点。
  7. 权利要求6所述的方法,所述确定所述参考点的法线包括:
    确定与所述参考点相邻的一个或多个多边形网格;
    确定所述一个或多个多边形网格对应的一个或多个法线;以及
    根据所述一个或多个法线,确定所述参考点的法线。
  8. 权利要求4所述的方法,调整所述模型的第二边缘包括:
    对所述第二边缘进行相似性变换;
    根据关联因子,对所述第二边缘进行仿射变换;或
    基于一个能量函数,对所述第二边缘进行微调。
  9. 权利要求1所述的方法,所述匹配模型与重建的图像包括:
    获取第二分类器;
    根据第二分类器,进行加权广义霍夫变换;以及
    根据加权广义霍夫变换的结果,匹配所述模型和图像。
  10. 权利要求6所述的方法,所述获取第一分类器包括:
    获取点分类器,所述点分类器对所述第一边缘的多个点根据与锐利程度和所处位置相关的图像特征进行分类;
    获取点分类器分类后的多个点,其中至少一部分所述点分类器分类后的多个点位于第一边缘一定范围内;
    确定第一边缘一定范围内的点分类器分类后的多个点为正样本;
    确定第一边缘一定范围以外的点分类器分类后的多个点为负样本;
    对所述正样本和负样本进行分类;以及
    根据分类后的正样本和负样本获得训练后的第一分类器。
  11. 权利要求9所述的方法,所述获取第二分类器包括:
    获取模型的多个点,其中至少一部分所述多个点位于第二边缘一定范围内;
    确定第二边缘一定范围内的点为正样本;
    确定第二边缘一定范围以外的点为负样本;
    根据锐利程度和所处位置,对所述正样本和负样本进行分类;以及
    根据分类后的正样本和负样本获得训练后的第二分类器。
  12. 一个系统,包括:
    一个存储器,被配置为存储数据及指令;
    一个与存储器建立通信的处理器,其中,当执行存储器中的指令时,所述处理器被配置为:
    获取图像数据;
    基于所述图像数据,重建图像,其中,所述图像包括一个或多个第一边缘;
    获取一个模型,其中,所述模型包括与所述一个或多个第一边缘相对应的一个或多个第二边缘;
    根据所述一个或多个第一边缘和所述一个或多个第二边缘,匹配所述模型与所述重建后的图像;以及
    根据所述一个或多个第一边缘,调整所述模型的一个或多个第二边缘。
  13. 权利要求12所述的系统,所述处理器被进一步配置为:
    获取多个参考模型;
    对获取的多个参考模型进行配准;
    确定配准后所述多个参考模型上的多个控制点;
    基于所述多个参考模型上的多个控制点,获得所述模型的控制点;以及
    根据所述模型的控制点,生成所述模型。
  14. 权利要求13所述的系统,所述处理器被进一步配置为:
    根据所述模型的控制点与所述模型中所述一个或多个第二边缘的关系,生成所述模型的控制点的关联因子。
  15. 权利要求12所述的系统,所述处理器被进一步配置为:
    确定所述第二边缘上的一个参考点;
    确定与所述参考点相对应的目标点;以及
    根据所述目标点,调整所述模型的第二边缘。
  16. 权利要求15所述的系统,所述处理器被进一步配置为:
    确定所述参考点的法线;
    获取步长和搜索范围;
    根据所述步长和搜索范围,沿法线确定一个或多个候选点;
    获取一个第一分类器;
    根据所述第一分类器,确定所述一个或多个候选点对应于所述第一边缘的概率;以及
    基于所述一个或多个候选点对应于所述第一边缘的概率,确定所述目标点。
  17. 权利要求14所述的系统,所述处理器被进一步配置为:
    对所述第二边缘进行相似性变换;
    根据关联因子,对所述第二边缘进行仿射变换;或
    基于一个能量函数,对所述第二边缘进行微调。
  18. 权利要求12所述的系统,所述处理器被进一步配置为:
    获取第二分类器;
    根据第二分类器,进行加权广义霍夫变换;以及
    根据加权广义霍夫变换的结果,匹配所述模型和图像。
  19. 权利要求16所述的系统,所述处理器被进一步配置为:
    获取点分类器,所述点分类器对所述第一边缘的多个点根据与锐利程度和所处位置相关的图像特征进行分类;
    获取点分类器分类后的多个点,其中至少一部分所述点分类器分类后的多个点位于第一边缘一定范围内;
    确定第一边缘一定范围内的点分类器分类后的多个点为正样本;
    确定第一边缘一定范围以外的点分类器分类后的多个点为负样本;
    对所述正样本和负样本进行分类;以及
    根据分类后的正样本和负样本获得训练后的第一分类器。
  20. 一个存有计算机程序的永久的计算机可读媒质,该计算机程序包括指令,该指令可由至少一个处理器执行以实现一种方法,所述方法包括:
    获取图像数据;
    基于所述图像数据,重建图像,其中,所述图像包括一个或多个第一边缘;
    获取一个模型,其中,所述模型包括与所述一个或多个第一边缘相对应的一个或多个第二边缘;
    根据所述一个或多个第一边缘和所述一个或多个第二边缘,匹配所述模型与所述重建后的图像;以及
    根据所述一个或多个第一边缘,调整所述模型的一个或多个第二边缘。
PCT/CN2017/083184 2017-05-05 2017-05-05 一种图像分割方法及系统 WO2018201437A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17908558.4A EP3608872B1 (en) 2017-05-05 2017-05-05 Image segmentation method and system
PCT/CN2017/083184 WO2018201437A1 (zh) 2017-05-05 2017-05-05 一种图像分割方法及系统
US16/674,172 US11170509B2 (en) 2017-05-05 2019-11-05 Systems and methods for image segmentation
US17/454,053 US11935246B2 (en) 2017-05-05 2021-11-08 Systems and methods for image segmentation

Publications (1)

Publication Number Publication Date
WO2018201437A1 true WO2018201437A1 (zh) 2018-11-08

Family

ID=64016657

Country Status (3)

Country Link
US (2) US11170509B2 (zh)
EP (1) EP3608872B1 (zh)
WO (1) WO2018201437A1 (zh)

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
US10878569B2 (en) * 2018-03-28 2020-12-29 International Business Machines Corporation Systems and methods for automatic detection of an indication of abnormality in an anatomical image
CN112381719B (zh) * 2020-11-24 2024-05-28 维沃移动通信有限公司 图像处理方法及装置
CN112767530B (zh) * 2020-12-17 2022-09-09 中南民族大学 心脏图像三维重建方法、装置、设备及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103310449A (zh) * 2013-06-13 2013-09-18 沈阳航空航天大学 基于改进形状模型的肺分割方法
US20160063726A1 (en) * 2014-08-28 2016-03-03 Koninklijke Philips N.V. Model-based segmentation of an anatomical structure
CN105719278A (zh) * 2016-01-13 2016-06-29 西北大学 一种基于统计形变模型的器官辅助定位分割方法
CN105976384A (zh) * 2016-05-16 2016-09-28 天津工业大学 基于GVF Snake模型的人体胸腹腔CT图像主动脉分割方法

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9070207B2 (en) * 2007-09-06 2015-06-30 Yeda Research & Development Co., Ltd. Modelization of objects in images
US8265363B2 (en) * 2009-02-04 2012-09-11 General Electric Company Method and apparatus for automatically identifying image views in a 3D dataset
BR112015023898A2 (pt) * 2013-03-21 2017-07-18 Koninklijke Philips Nv aparelho de processamento de imagens, método de processamento de imagens, elemento de programa de computador para controlar um aparelho e mídia legível por computador
CN110214342A (zh) * 2017-01-18 2019-09-06 富士通株式会社 建模装置、建模方法以及建模程序
US10482619B2 (en) * 2017-07-27 2019-11-19 AI Incorporated Method and apparatus for combining data to construct a floor plan


Non-Patent Citations (2)

Title
LING, HUIQIANG ET AL.: "Application of Active Shape Model in Segmentation of Liver CT Image", JOURNAL OF ZHEJIANG UNIVERSITY OF TECHNOLOGY, vol. 40, no. 4, 31 August 2012 (2012-08-31), pages 451 - 452, XP009517181, ISSN: 1006-4303 *

Also Published As

Publication number Publication date
EP3608872A1 (en) 2020-02-12
EP3608872B1 (en) 2023-07-12
EP3608872A4 (en) 2020-02-19
US11935246B2 (en) 2024-03-19
US20220058806A1 (en) 2022-02-24
US20200065974A1 (en) 2020-02-27
US11170509B2 (en) 2021-11-09


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17908558; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017908558; Country of ref document: EP; Effective date: 20191105)