CN111968069A - Image processing method, image processing device and computer readable storage medium - Google Patents


Info

Publication number
CN111968069A
CN111968069A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910419074.1A
Other languages
Chinese (zh)
Other versions
CN111968069B (en)
Inventor
石磊
倪浩
郑永升
魏子昆
华铱炜
Current Assignee
Hangzhou Yitu Medical Technology Co ltd
Original Assignee
Hangzhou Yitu Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Yitu Medical Technology Co ltd filed Critical Hangzhou Yitu Medical Technology Co ltd
Priority to CN201910419074.1A priority Critical patent/CN111968069B/en
Publication of CN111968069A publication Critical patent/CN111968069A/en
Application granted granted Critical
Publication of CN111968069B publication Critical patent/CN111968069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30008 - Bone

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, and a computer-readable storage medium. The processing method includes: acquiring a set of positioning points of an object of interest in a first set of images, the first set of images being usable for reconstructing a 3D image; initializing an envelope network based on the set of positioning points; performing iterative computation on the envelope network to obtain a target envelope network, such that the target envelope network wraps a first preset proportion of the set of positioning points and each positioning point in a second preset proportion of the set is fitted by at least one point on the target envelope network; determining the pixels in the first set of images that correspond to points on the target envelope network, and determining the intensity value of each point on the target envelope network based on the intensity values of the region where its corresponding pixel is located; and obtaining a 2D second image based on the mapping relationship between the target envelope network and the second image and the intensity values of the points on the target envelope network. The processing method can automatically and accurately generate a 2D image from the first set of images, improving doctors' diagnostic efficiency.

Description

Image processing method, image processing device and computer readable storage medium
Technical Field
The present disclosure relates generally to image processing and analysis. More particularly, the present disclosure relates to a method of processing an image, a processing apparatus, and a computer-readable storage medium.
Background
When diagnosing a fracture from medical images, quickly finding and locating the fracture is very important; for a rib fracture, for example, this means determining from the rib images which rib is fractured and where along that rib the fracture lies.
At present, fractures are assessed by inspecting a series of 2D images (such as 2D CT slice images and MR slice images) to infer the 3D situation: the images must be examined frame by frame and the bones counted by serial number. This process is tedious and time-consuming, which lowers doctors' diagnostic efficiency.
The technical solutions of the present disclosure are proposed to solve the above problems.
Disclosure of Invention
The present disclosure aims to provide an image processing method, an image processing apparatus, and a computer-readable storage medium that can automatically and accurately generate a 2D image from a first set of images usable for reconstructing a 3D image including an object of interest. From the generated 2D image, a doctor can clearly and intuitively view the condition of the object of interest, which improves diagnostic efficiency.
According to a first aspect of the present disclosure, there is provided an image processing method, including: acquiring a set of positioning points of an object of interest in a first set of images, the first set of images being usable for reconstructing a 3D image including the object of interest; initializing an envelope network based on the set of positioning points of the object of interest; performing iterative computation on the envelope network to obtain a target envelope network, such that the target envelope network wraps a first preset proportion of the set of positioning points and each positioning point in a second preset proportion of the set of positioning points is fitted by at least one point on the target envelope network; determining the pixels in the first set of images that correspond to points on the target envelope network, and determining the intensity values of the points on the target envelope network based on the intensity values of the regions where the corresponding pixels are located in the first set of images; and obtaining a second image based on a mapping relationship between the target envelope network and the second image and the intensity values of the points on the target envelope network, the second image being a 2D image.
In some embodiments, the object of interest comprises a rib and the first set of images comprises a set of slice images.
In some embodiments, initializing an envelope network based on a set of positioning points of the object of interest comprises: determining a central axis based on the positioning points of the object of interest; establishing a first circle centered at each point on the central axis, the radius of each first circle being determined based on the positioning points in the corresponding section plane such that each positioning point in that plane lies within the first circle; setting several initial points on each first circle; and connecting adjacent initial points.
In some embodiments, iteratively computing the envelope network comprises: determining a relative offset vector based on the points on the envelope network and the set of positioning points of the object of interest; offsetting each point on the envelope network toward the positioning points of the object of interest using the relative offset vector; and obtaining the target envelope network when the iteration reaches the state where the offset distance of each point on the envelope network is smaller than a preset threshold.
In some embodiments, determining a relative offset vector based on points on the envelope network and the set of positioning points of the object of interest comprises: determining a first relative offset vector based on a point on the envelope network and the positioning point of the object of interest at minimum distance from that point; determining a center of gravity based on the point and its adjacent points on the envelope network, and determining a second relative offset vector based on the center of gravity and the point; and superimposing the first relative offset vector and the second relative offset vector to obtain the relative offset vector.
In some embodiments, offsetting the points on the envelope network toward the positioning points of the object of interest using the relative offset vector comprises: offsetting each point on the envelope network, using the relative offset vector, toward the positioning point in the set of positioning points at minimum distance from it, to obtain a second envelope network; and repeating the iterative computation when the offset distance of a point on the second envelope network is greater than or equal to the preset threshold.
In some embodiments, determining the pixels in the first set of images that correspond to points on the target envelope network comprises: interpolating the pixels in the first set of images to obtain the pixel corresponding to each point on the target envelope network.
In some embodiments, determining the intensity value of a point on the target envelope network based on the intensity values of the region where its corresponding pixel is located in the first set of images comprises: assigning a corresponding weight to the intensity value of each pixel in that region; and determining the intensity value of the point on the target envelope network based on the weighted intensity values.
According to a second aspect of the present disclosure, there is provided a processing apparatus of an image, the processing apparatus including: a communication interface configured to receive a first set of images, the first set of images including an object of interest and the first set of images being usable to reconstruct a 3D image including the object of interest; a memory having computer-executable instructions stored thereon; and a processor that, when executing the computer-executable instructions, implements a method of processing an image according to any of the present disclosure.
According to a third aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method of processing an image according to any one of the present disclosure.
According to the image processing method, the image processing apparatus, and the computer-readable storage medium of the various embodiments of the present disclosure, a 2D image can be automatically and accurately generated from a first set of images usable for reconstructing a 3D image including an object of interest, and a doctor can clearly and intuitively view the condition of the object of interest from the generated 2D image, so the doctor's diagnostic efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may designate like components in different views. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of like components. The drawings illustrate various embodiments generally, by way of example and not by way of limitation, and together with the description and claims, serve to explain the disclosed embodiments.
FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure;
FIGS. 2(a)-2(d) are schematic diagrams illustrating the process of processing a set of 2D rib slice images to obtain a 2D image of the ribs using a processing method according to an embodiment of the disclosure;
FIG. 3 shows a flowchart of a method of initializing an envelope network according to an embodiment of the disclosure;
FIG. 4 shows a schematic structural diagram of an initialized envelope network according to an embodiment of the present disclosure;
FIG. 5 shows a flowchart of one specific embodiment of a method of iteratively computing an envelope network according to the present disclosure;
FIGS. 6(a)-6(c) show schematic diagrams of the process of iteratively computing an envelope network according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, a detailed description of known functions and known components has been omitted from the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. Herein, the first set of images may include, but is not limited to, a set of 2D CT slice images, 2D MR slice images, and the like, where the images in the first set have spatial positional relationships with one another so that a 3D image can be reconstructed from the first set of images.
As shown in fig. 1, the processing method starts at step S101 by acquiring a set of positioning points of an object of interest in a first set of images, the first set of images being usable for reconstructing a 3D image including the object of interest. In some embodiments, the first set of images may be acquired using an existing medical imaging device, such as a computed tomography (CT) device or a magnetic resonance imaging (MRI) device, and the object of interest may include, but is not limited to, ribs, lumbar vertebrae, cervical vertebrae, and the like. The images in the first set of images have spatial positional relationships with one another, so a 3D image including the object of interest can be reconstructed from them, and the acquired set of positioning points can describe at least the approximate shape of the object of interest in three-dimensional space.
In step S102, an envelope network is initialized based on the set of positioning points of the object of interest. Specifically, initializing the envelope network includes: first determining a central axis based on the positioning points of the object of interest; then determining the points on the envelope network from the central axis and the set of positioning points; and connecting adjacent points, the points and the connecting edges forming the initialized envelope network. The initialized envelope network wraps at least part of the set of positioning points, and each point on it keeps a certain distance from the wrapped positioning points. There are various methods of initializing the envelope network, and its shape may be a cylinder, a rectangular parallelepiped, a cone, or the like, which is not limited here.
In step S103, the envelope network is iteratively computed to obtain a target envelope network, such that the target envelope network wraps a first preset proportion of the set of positioning points (hereinafter also referred to as the "first set of positioning points"), and each positioning point in a second preset proportion of the set (hereinafter also referred to as the "second set of positioning points") is fitted by at least one point on the target envelope network. Specifically, the iterative computation gradually offsets each point on the envelope network toward the positioning point of the object of interest closest to it; the resulting target envelope network wraps the first preset proportion of the set of positioning points, and each positioning point in the second preset proportion is fitted by at least one point on the target envelope network, so the object of interest can be drawn accurately from the target envelope network. The first and second preset proportions may be single values or value ranges, may be the same or different, and may be set and/or adjusted by a user. For example, the first preset proportion may be set to 90% or more, the second preset proportion to 85% or more, and so on.
In step S104, the pixels in the first set of images corresponding to the points on the target envelope network are determined, and the intensity values of the points on the target envelope network are determined based on the intensity values of the regions where the corresponding pixels are located. Specifically, a three-dimensional array of pixels may be built from the images in the first set and the spatial positional relationships between them. The coordinate values of the pixels in the first set of images are integers, whereas the coordinate values of points on the target envelope network may be integers or non-integers. In some embodiments, if a point on the target envelope network has non-integer coordinates, the pixel at minimum distance from that point in the three-dimensional pixel array may be taken as its corresponding pixel, or the array may be processed to obtain the corresponding pixel. After the corresponding pixels are determined, a region is defined around each corresponding pixel; this may be a two-dimensional region defined by the corresponding pixel, or a three-dimensional region defined by the corresponding pixel and the three-dimensional pixel array. The intensity value of the point on the target envelope network is then determined from the intensity values of the pixels in that region; various methods may be used for this, and no specific limitation is made here.
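One way to realize the sampling step described above is trilinear interpolation over the eight voxels surrounding a non-integer point. The following NumPy sketch is illustrative only; the function name `sample_intensity` and its signature are assumptions, not the patent's exact procedure:

```python
import numpy as np

def sample_intensity(volume, point):
    """Intensity at a (possibly non-integer) envelope point.

    `volume` is the 3D array stacked from the first set of images.
    Trilinear interpolation over the 8 surrounding voxels -- one of
    the interpolation choices the text leaves open.
    """
    volume = np.asarray(volume, dtype=float)
    p = np.asarray(point, dtype=float)
    lo = np.floor(p).astype(int)
    lo = np.clip(lo, 0, np.array(volume.shape) - 2)  # stay inside the array
    f = p - lo                                       # fractional part in [0, 1)
    val = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # weight = product of distances to the opposite corner
                w = ((f[0] if dz else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dx else 1 - f[2]))
                val += w * volume[lo[0] + dz, lo[1] + dy, lo[2] + dx]
    return val
```

At an integer coordinate this reduces to a direct voxel lookup, which matches the nearest-pixel alternative the text also mentions.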
In step S105, a second image is obtained based on the mapping relationship between the target envelope network and the second image and the intensity values of the points on the target envelope network, the second image being a 2D image. The object of interest can thus be viewed without reconstructing the first set of images in three dimensions, which saves time and resources and can achieve a rendering quality independent of the display (since 3D rendering and presentation may not be needed). In some embodiments, the target envelope network may be tiled flat, and the tiled network drawn according to the intensity values of its points to obtain the second image, with the pixels of the second image corresponding one-to-one to the points on the target envelope network. In some embodiments, after the tiled target envelope network is rendered according to the intensity values of the points on it, the rendered image may be multiplied by a mapping matrix to obtain second images of different sizes. In some embodiments, the pixels of the rendered image may also be interpolated by an interpolation algorithm to obtain a larger second image.
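The tiling step can be pictured as unrolling the tube-shaped net into a levels-by-angles grid of intensities. A minimal NumPy sketch under that assumption follows; the function name `unroll_envelope` and the nearest-neighbour enlargement (a crude stand-in for the mapping matrix or interpolation the text describes) are illustrative:

```python
import numpy as np

def unroll_envelope(intensities, n_levels, n_points, scale=1):
    """Tile the envelope net into a 2D image.

    Envelope vertices form an n_levels x n_points grid (one row per
    circle along the axis, one column per angular position), so
    "cutting" the tube and flattening it is a reshape; `scale`
    optionally enlarges the result by nearest-neighbour repetition.
    """
    img = np.asarray(intensities, dtype=float).reshape(n_levels, n_points)
    if scale > 1:  # crude enlargement; real use would interpolate
        img = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    return img
```

With `scale=1` each output pixel corresponds one-to-one to a point on the target envelope network, as in the first embodiment above.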
The image processing method provided by the embodiment of the disclosure can automatically and accurately generate the 2D image according to the first group of images which can be used for reconstructing the 3D image including the object of interest, so that a doctor can clearly and intuitively view the situation of the object of interest according to the generated 2D image, and the diagnosis efficiency of the doctor can be improved.
In some embodiments, the object of interest comprises a rib and the first set of images comprises a set of slice images. Embodiments of the present disclosure take the first set of images as a set of 2D slice images of ribs as an example, but the present disclosure is not limited thereto.
Figs. 2(a)-2(d) are schematic diagrams illustrating the process of obtaining a 2D image of the ribs from a set of 2D rib slice images using a processing method according to an embodiment of the disclosure. Fig. 2(a) shows a set of positioning points of the ribs obtained from the set of 2D rib slice images; fig. 2(b) shows the initialized envelope network; iterative computation on the initialized envelope network yields the target envelope network (fig. 2(c)). Finally, after the intensity values of the points on the target envelope network are determined from the intensity values of the regions where the corresponding pixels in the first set of images are located, the second image is drawn based on the mapping relationship between the target envelope network and the second image and those intensity values, yielding a 2D image of the ribs (fig. 2(d)). From this 2D image, the doctor can clearly and intuitively observe the condition of each rib and the position of any fracture (circled in fig. 2(d)). Compared with the current practice of examining the ribs frame by frame over a set of 2D rib slice images, this takes less time and can improve the doctor's diagnostic efficiency.
Fig. 3 shows a flowchart of a method of initializing an envelope network according to an embodiment of the present disclosure. As shown in fig. 3, the method starts at step S301 by determining a central axis based on the positioning points of the object of interest. There are various methods of doing this. In some embodiments, the set of positioning points of the object of interest may all be projected onto the XY plane, the average of the projected points computed, and the projected points far from that average identified; the three projected points at maximum distance from the average are selected, a circumscribed cylinder is established based on them, and the central axis of that cylinder is taken as the central axis of the envelope network. In some embodiments, the four projected points at maximum distance from the average may instead be selected, a circumscribed cuboid established based on them, and its central axis used as the central axis of the envelope network. In some embodiments, the coordinate sums of the positioning points on the XY plane may be computed and averaged, and the central axis of a geometric body wrapping all the positioning points (for example, but not limited to, a circumscribed cylinder or circumscribed cuboid) established through the point corresponding to the averaged coordinates and used as the central axis of the envelope network.
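The axis-finding step above can be sketched in NumPy. This simplified variant runs the axis through the mean of the XY projections and bounds all projected points with a single circle centred on that mean; the function name `central_axis_xy` and this single-circle bound are assumptions, cruder than the three-point circumscribed cylinder described in the text:

```python
import numpy as np

def central_axis_xy(points):
    """Estimate a Z-direction central axis from 3D positioning points.

    Projects the points onto the XY plane, averages the projections,
    and runs the axis through that mean, parallel to Z. Also returns
    a radius that encloses every projected point.
    """
    points = np.asarray(points, dtype=float)   # shape (N, 3)
    xy_mean = points[:, :2].mean(axis=0)       # mean of XY projections
    # Radius reaching the farthest projected point guarantees that a
    # cylinder with this axis and radius wraps all positioning points.
    radius = np.linalg.norm(points[:, :2] - xy_mean, axis=1).max()
    return xy_mean, radius
```

The returned axis position and radius can seed the circumscribed cylinder that the initialized envelope network is built on.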
In step S302, a first circle is created centered at each point on the central axis, with the radius of each first circle determined based on the positioning points in the corresponding section plane, such that every positioning point in that plane lies within the circle. Specifically, with the central axis along the Z direction, a number of points are first placed at equal intervals along the axis, each serving as the center of a first circle; the positioning points in the XY plane through each axis point are found, and those farthest from the circle center are used to determine the radius. In some embodiments, the three positioning points at maximum distance from the center may be selected, a circumscribed circle established through them, and its radius used as the radius of the first circle. In some embodiments, several of the positioning points farther from the center may be used to construct other geometric figures, as long as all positioning points in the corresponding section plane are guaranteed to lie within the resulting first circle.
In step S303, several initial points are set on the first circle. Specifically, several initial points may be set equidistantly on the first circle.
In step S304, adjacent initial points are connected. Specifically, each point is connected to its vertically and horizontally adjacent points, and the points together with the connecting edges form the initialized envelope network.
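Steps S302 to S304 can be sketched together as building a cylindrical mesh: one first circle per axis level, equidistant initial points on each circle, and edges between neighbours along and around the tube. The function name `init_envelope` and the fixed point count are illustrative assumptions:

```python
import numpy as np

def init_envelope(z_levels, center_xy, radii, n_points=16):
    """Build an initial cylindrical envelope net (steps S302-S304, sketched).

    One first circle per Z level with its own radius, `n_points`
    equidistant initial points on each circle; neighbours are connected
    around each ring and axially between rings.
    Returns vertices of shape (levels * n_points, 3) and edge index pairs.
    """
    angles = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    verts = []
    for z, r in zip(z_levels, radii):
        xs = center_xy[0] + r * np.cos(angles)
        ys = center_xy[1] + r * np.sin(angles)
        verts.append(np.column_stack([xs, ys, np.full(n_points, z)]))
    verts = np.vstack(verts)
    edges = []
    n_levels = len(z_levels)
    for lv in range(n_levels):
        base = lv * n_points
        for k in range(n_points):
            edges.append((base + k, base + (k + 1) % n_points))   # ring edge
            if lv + 1 < n_levels:
                edges.append((base + k, base + n_points + k))     # axial edge
    return verts, edges
```

Each vertex thus has left-right (ring) and up-down (axial) neighbours, matching the connection rule in step S304.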
Fig. 4 is a schematic structural diagram of an initialized envelope network according to an embodiment of the disclosure, and as shown in fig. 4, the initialized envelope network (a grid-shaped graph) substantially encloses the entire object of interest and has a certain distance from the object of interest, and the first set of anchor points of the first preset proportion is located on or within the initialized envelope network.
In some embodiments, the iterative computation of the envelope network comprises: determining a relative offset vector based on each point on the envelope network and the set of positioning points of the object of interest; offsetting each point on the envelope network toward the positioning points of the object of interest using the relative offset vector; and obtaining the target envelope network when the iteration reaches the state where the offset distance of each point on the envelope network is smaller than a preset threshold. Specifically, the relative offset vector moves a point on the envelope network toward the positioning point of the object of interest at minimum distance from it, while maintaining the relative positional relationships between the points on the envelope network during the offset, so that the structure of the envelope network is not seriously distorted or folded during the iterative computation.
Fig. 5 shows a flowchart of a specific embodiment of the method for iteratively computing the envelope network according to the present disclosure. As shown in fig. 5, the method starts at step S501: a first relative offset vector is determined based on a point on the initialized envelope network and the positioning point of the object of interest at minimum distance from that point. Specifically, the first relative offset vector guides the point on the envelope network toward that nearest positioning point.
In step S502, a center of gravity is determined based on each point on the initialized envelope network and its adjacent points, and a second relative offset vector is determined based on the center of gravity and the point. Specifically, the center of gravity may be computed from a point on the envelope network and its horizontally and vertically adjacent points, after which the second relative offset vector from the point toward that center of gravity is calculated. The second relative offset vector maintains the relative positional relationships between the points on the envelope network, preventing the structure from being seriously distorted or folded while the points are offset toward the nearest positioning points of the object of interest.
In step S503, the first relative offset vector and the second relative offset vector are superimposed to obtain a relative offset vector. Specifically, in the process of superimposing the first relative offset vector and the second relative offset vector, corresponding coefficients may be respectively given to the first relative offset vector and the second relative offset vector, so that each point on the envelope network can better converge, thereby obtaining the target envelope network.
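Steps S501 to S503 for a single vertex can be sketched as follows; the function name `offset_vector` and the coefficients `alpha`/`beta` are illustrative assumptions for the "corresponding coefficients" the text leaves unspecified:

```python
import numpy as np

def offset_vector(p, anchor_points, neighbors, alpha=0.5, beta=0.5):
    """Relative offset for one envelope vertex (steps S501-S503, sketched).

    first  : pulls p toward its nearest positioning point (S501);
    second : pulls p toward the centre of gravity of its mesh
             neighbours (S502), which keeps the net from twisting
             or folding;
    the weighted superposition (S503) is the relative offset vector.
    """
    anchor_points = np.asarray(anchor_points, dtype=float)
    d = np.linalg.norm(anchor_points - p, axis=1)
    first = anchor_points[d.argmin()] - p                      # toward nearest point
    second = np.asarray(neighbors, dtype=float).mean(axis=0) - p  # toward centroid
    return alpha * first + beta * second
```

Tuning `alpha` and `beta` trades data fidelity against mesh smoothness, which is how the coefficients help each point converge.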
In step S504, the initialized points on the envelope network are shifted to the anchor point with the minimum distance from the initialized point in the set of anchor points by using the relative shift vector, so as to obtain a second envelope network.
In step S505, it is determined whether the offset distance of the point on the second packet network is smaller than a preset threshold, if not, steps S501 to S504 are repeated, and if yes, the process proceeds to step S506.
In step S506, the iterative computation is stopped, and the target envelope network is obtained.
Specifically, the offset distance of a point on the envelope network may be determined from the relative offset vector. In some embodiments, the computed offset distances of the points on the envelope network may be averaged, and the iterative computation stops when this average falls below a preset threshold, yielding the target envelope network. The preset threshold is set small enough that each positioning point in the set of positioning points is fitted by at least one point on the target envelope network.
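The iteration and its stopping rule (mean offset distance below a preset threshold) might look like the sketch below. For simplicity it uses the first relative offset vector alone, which the disclosure explicitly permits as an alternative; `step`, `threshold`, and `max_iters` are assumed values:

```python
import numpy as np

def iterate_envelope(mesh_points, anchor_points, step=0.5,
                     threshold=1e-3, max_iters=200):
    """Shift each mesh point a fraction `step` of the way toward its nearest
    positioning point, stopping once the mean offset distance drops below
    `threshold` (steps S504-S506)."""
    pts = np.asarray(mesh_points, dtype=float).copy()
    anchors = np.asarray(anchor_points, dtype=float)
    for _ in range(max_iters):
        d = np.linalg.norm(pts[:, None, :] - anchors[None, :, :], axis=2)
        offsets = step * (anchors[d.argmin(axis=1)] - pts)
        pts += offsets
        # S505/S506: stop when the average offset distance is small enough.
        if np.linalg.norm(offsets, axis=1).mean() < threshold:
            break
    return pts
```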
The flow shown in fig. 5 is by way of example only, and some of its steps may be omitted or replaced in other embodiments. For example, the relative offset vector may be computed as a superposition of the first and second relative offset vectors, or by other methods, such as, but not limited to, using the first relative offset vector directly, or multiplying the second relative offset vector by an adjustment coefficient. As another example, in step S504 the step size of each offset may be adjusted to avoid improper deformation of the envelope network, such as twisting or folding.
Fig. 6(a)-6(c) are schematic diagrams of the iterative computation performed on an envelope network according to an embodiment of the present disclosure, where the grid-shaped graph represents the envelope network. Fig. 6(a) shows the initialized envelope network. Through the iterative computation, each point on the envelope network is gradually shifted toward the positioning point on the object of interest nearest to it; when the offset distance of the points on the envelope network falls below a preset threshold, the iteration stops and the target envelope network is obtained (as shown in fig. 6(c)). Each positioning point in the second preset proportion of the set of positioning points is then attached to at least one point on the target envelope network, so that the object of interest can be drawn from the target envelope network.
In some embodiments, determining the pixels in the first set of images that correspond to points on the target envelope network comprises: interpolating the pixels in the first set of images to obtain the pixels corresponding to the points on the target envelope network. Specifically, the coordinate values of the pixels in the three-dimensional pixel array built from the first set of images are all integers, whereas the coordinates of points on the target envelope network may be integer or non-integer. A point on the target envelope network may therefore have no one-to-one matching pixel in the array, in which case the corresponding pixel value may be obtained by interpolating the pixels of the three-dimensional array.
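The interpolation described here is commonly realized as trilinear interpolation over the eight voxels surrounding a non-integer coordinate. The sketch below is one such realization (the patent does not fix a particular interpolation scheme), and it assumes the sampled point lies strictly inside the volume bounds:

```python
import numpy as np

def trilinear_sample(volume, point):
    """Sample a (Z, Y, X) intensity volume at a non-integer (z, y, x)
    coordinate by trilinear interpolation of the eight surrounding voxels.
    Assumes the point lies strictly inside the volume bounds."""
    z, y, x = point
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    dz, dy, dx = z - z0, y - y0, x - x0
    val = 0.0
    for iz in (0, 1):
        for iy in (0, 1):
            for ix in (0, 1):
                # Each corner voxel is weighted by the product of the
                # complementary fractional distances along each axis.
                w = ((dz if iz else 1 - dz) *
                     (dy if iy else 1 - dy) *
                     (dx if ix else 1 - dx))
                val += w * float(volume[z0 + iz, y0 + iy, x0 + ix])
    return val
```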
In some embodiments, determining the intensity value of a point on the target envelope network based on the intensity values of the region in which the corresponding pixel in the first set of images is located comprises: assigning a corresponding weight to the intensity value of each pixel in the region where the corresponding pixel is located, and determining the intensity value of the point on the target envelope network from the weighted intensity values. Specifically, after the pixel corresponding to a point on the target envelope network is found in the three-dimensional pixel array built from the first set of images, a preset pixel region may be delimited around that corresponding pixel. The distance between each pixel in the region and the corresponding pixel is computed, a weight is assigned to each pixel based on this distance, the weighted intensity values are summed, and the sum is taken as the intensity value of the point on the envelope network. In some embodiments, the weight of each pixel in the preset pixel region may instead be set to 1, the weighted intensity values averaged, and the average taken as the intensity value of the point on the target envelope network.
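One possible form of the distance-based weighting is a normalized Gaussian kernel, sketched below. The Gaussian form and `sigma` are assumptions; the patent only states that the weight is determined from the distance, and with equal weights the scheme reduces to the plain-average variant mentioned above:

```python
import numpy as np

def weighted_intensity(region_values, distances, sigma=1.0):
    """Combine the intensities of the pixels in the preset region into a
    single value using distance-based weights. The Gaussian form
    w = exp(-d^2 / (2 * sigma^2)) is an assumption; with all distances
    equal it reduces to a plain average."""
    w = np.exp(-np.asarray(distances, dtype=float) ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()  # normalize so the weights sum to 1
    return float(np.sum(w * np.asarray(region_values, dtype=float)))
```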
Fig. 7 is a block diagram of an image processing apparatus 700 according to an embodiment of the disclosure. As shown in fig. 7, the disclosure further provides an image processing apparatus 700 comprising a communication interface 710, a memory 720, and a processor 730. Specifically, the communication interface 710 is configured to receive a first set of images, which may be a series of 2D images acquired from any imaging device (e.g., CT, MRI, etc.); the first set of images includes an object of interest and can be used to reconstruct a 3D image including the object of interest. The memory 720 stores computer-executable instructions, and the processor 730, when executing the computer-executable instructions stored on the memory 720, implements a method of processing an image according to any one of the embodiments of the present disclosure.
The image processing apparatus 700 according to the embodiments of the present disclosure can automatically and accurately generate a 2D image from a first set of images usable for reconstructing a 3D image that includes an object of interest. A doctor can then clearly and intuitively view the condition of the object of interest in the generated 2D image, which improves diagnostic efficiency.
In some embodiments, the processor 730 may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or the like. More specifically, the processor 730 may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor 730 may also be one or more special-purpose processing devices, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a system on a chip (SoC), or the like. The processor 730 may be communicatively coupled to the memory 720 and configured to execute the computer-executable instructions stored thereon, so as to perform a method of processing an image according to the various embodiments of the present disclosure.
In some embodiments, the memory 720 may be a non-transitory computer-readable medium, such as Read Only Memory (ROM), Random Access Memory (RAM), phase change random access memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash disk or other forms of flash memory, cache, registers, static memory, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, tape cassettes or other magnetic storage devices, or any other possible non-transitory medium that may be used to store information or instructions accessible by a computer device.
In some embodiments, the communication interface 710 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, or Thunderbolt interfaces), a wireless network adapter (such as a WiFi adapter), a telecommunication (3G, 4G/LTE, etc.) adapter, and the like. The image processing apparatus 700 may be connected to other components, such as a medical image database or a CT system, through the communication interface 710.
The present disclosure also provides a non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method of processing an image according to any one of the present disclosure. In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the method of processing an image according to any one of the present disclosure.
In some embodiments, the processing device may be integrated into an existing image processing platform in various ways. For example, the program modules may be written onto an existing rib image processing platform through a development interface, thereby achieving compatibility with, and updating of, the existing platform, reducing the hardware cost of implementing the processing method, and facilitating the popularization and application of the processing method and device.
The present disclosure describes various operations or functions that may be implemented as, or defined as, software code or instructions. Such content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). A software implementation of the embodiments described herein may be provided through an article of manufacture having the code or instructions stored thereon, or through a method of operating a communication interface to transmit data through the communication interface. A machine- or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device or an electronic system), such as recordable/non-recordable media (e.g., Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, or a disk controller. The communication interface may be configured by providing configuration parameters and/or transmitting signals to prepare it to provide a data signal describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. Other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that an unclaimed disclosed feature is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.

Claims (10)

1. A method for processing an image, the method comprising:
acquiring a set of positioning points of an object of interest in a first set of images, the first set of images being usable for reconstructing a 3D image comprising the object of interest;
initializing an envelope network based on the set of positioning points of the object of interest;
performing iterative computation on the envelope network to obtain a target envelope network, so that the target envelope network wraps a first preset proportion of the set of positioning points, and each positioning point of a second preset proportion in the set of positioning points is attached to at least one point on the target envelope network;
determining pixels corresponding to the points on the target envelope network in the first group of images, and determining the intensity values of the points on the target envelope network based on the intensity values of the areas where the corresponding pixels in the first group of images are located;
obtaining a second image based on a mapping relation between the target envelope network and the second image and the intensity value of each point on the target envelope network, wherein the second image is a 2D image.
2. The method of processing an image according to claim 1, wherein the object of interest comprises a rib and the first set of images comprises a set of slice images.
3. The method of processing an image according to claim 1, wherein initializing an envelope network based on a set of positioning points of the object of interest comprises:
determining a central axis based on the positioning points of the object of interest;
establishing a first circle with each point on the central axis as its center, wherein the radius of the first circle is determined based on the positioning points in the corresponding tangent plane, so that each positioning point in the corresponding tangent plane lies within the first circle;
setting a plurality of initial points on the first circle;
connecting adjacent initial points.
4. The method of processing an image according to claim 1, wherein iteratively computing the envelope network comprises:
determining a relative offset vector based on points on the envelope network and the set of positioning points of the object of interest;
shifting each point on the envelope network toward the positioning points of the object of interest using the relative offset vector;
obtaining the target envelope network when the iteration proceeds until the offset distance of each point on the envelope network is smaller than a preset threshold.
5. The method of processing an image according to claim 4, wherein determining a relative offset vector based on points on the envelope network and the set of positioning points of the object of interest comprises:
determining a first relative offset vector based on a point on the envelope network and the positioning point of the object of interest having the minimum distance to the point;
determining a center of gravity based on a point on the envelope network and points adjacent to the point, and determining a second relative offset vector based on the center of gravity and the point on the envelope network;
superimposing the first relative offset vector and the second relative offset vector to obtain the relative offset vector.
6. The method of processing an image according to claim 4, wherein shifting each point on the envelope network toward each positioning point of the object of interest using the relative offset vector comprises:
shifting a point on the envelope network toward the positioning point in the set of positioning points with the minimum distance to the point, using the relative offset vector, to obtain a second envelope network;
repeating the iterative calculation in a case where the offset distance of a point on the second envelope network is greater than or equal to a preset threshold.
7. The method of processing an image according to claim 1, wherein determining pixels in the first set of images that correspond to points on the target envelope network comprises:
interpolating each pixel in the first set of images to obtain a pixel corresponding to a point on the target envelope network.
8. The method of claim 1, wherein determining the intensity value of the point on the target envelope network based on the intensity value of the region in which the corresponding pixel in the first set of images is located comprises:
giving corresponding weight to the intensity value of each pixel in the area where the corresponding pixel is located;
determining an intensity value for a point on the target envelope network based on the weighted intensity values for the respective pixels.
9. An apparatus for processing an image, the apparatus comprising:
a communication interface configured to receive a first set of images, the first set of images including an object of interest and the first set of images being usable to reconstruct a 3D image including the object of interest;
a memory having computer-executable instructions stored thereon; and
a processor which, when executing the computer executable instructions, carries out a method of processing an image according to any one of claims 1 to 8.
10. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement a method of processing an image according to any one of claims 1-8.
CN201910419074.1A 2019-05-20 2019-05-20 Image processing method, processing device and computer readable storage medium Active CN111968069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910419074.1A CN111968069B (en) 2019-05-20 2019-05-20 Image processing method, processing device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111968069A true CN111968069A (en) 2020-11-20
CN111968069B CN111968069B (en) 2023-06-27

Family

ID=73358170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910419074.1A Active CN111968069B (en) 2019-05-20 2019-05-20 Image processing method, processing device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111968069B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6226542B1 (en) * 1998-07-24 2001-05-01 Biosense, Inc. Three-dimensional reconstruction of intrabody organs
US20130230136A1 (en) * 2011-08-25 2013-09-05 Toshiba Medical Systems Corporation Medical image display apparatus and x-ray diagnosis apparatus
CN106548447A (en) * 2016-11-22 2017-03-29 青岛海信医疗设备股份有限公司 Obtain the method and device of medical science two dimensional image
CN106683090A (en) * 2016-12-31 2017-05-17 上海联影医疗科技有限公司 Rib positioning method in medical image and system thereof
CN108320314A (en) * 2017-12-29 2018-07-24 北京优视魔方科技有限公司 A kind of image processing method and device based on the cross-section images of CT, Bone images display system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RINGL H et al.: "The ribs unfolded - a CT visualization algorithm for fast detection of rib fractures: effect on sensitivity and specificity in trauma patients", EUROPEAN RADIOLOGY
SOWMYA RAMAKRISHNAN: "Automatic three-dimensional rib centerline extraction from CT scans for enhanced visualization and anatomical context", MEDICAL IMAGING 2011: IMAGE PROCESSING
ZHANG SHUNLI et al.: "Fast image reconstruction of cone-beam CT based on a minimum cylindrical region", Computer Science, no. 05
PEI ZHENG: "Research and application of chest DR image segmentation in texture retrieval", China Master's Theses Full-text Database


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant