WO2014049346A1 - Image filtering - Google Patents

Image filtering

Info

Publication number
WO2014049346A1
WO2014049346A1 (PCT/GB2013/052495)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
samples
points
streamline
vector
Prior art date
Application number
PCT/GB2013/052495
Other languages
French (fr)
Inventor
Veronika Solteszova
Ivan VIOLA
Original Assignee
Bergen Teknologioverføring As
Wilson, Timothy James
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to GBGB1217225.0A priority Critical patent/GB201217225D0/en
Priority to GB1217225.0 priority
Application filed by Bergen Teknologioverføring As, Wilson, Timothy James filed Critical Bergen Teknologioverføring As
Publication of WO2014049346A1 publication Critical patent/WO2014049346A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001Image restoration
    • G06T5/002Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • G06T2207/101363D ultrasound image

Abstract

Image data (1) is filtered by determining, for each of a first plurality of points (4, 5) within the image data, which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation. This generates data representing a vector field. For each of a second plurality of points (4, 5) within the image data, (i) the vector field is integrated to determine a streamline or streamline segment from the point, and (ii) a filtered value for the point is determined by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.

Description

Image filtering
This invention relates to methods and apparatus for filtering image data, including, but not limited to, three-dimensional medical ultrasonography data.
Image data can be acquired in a variety of different ways, such as from digital cameras, medical ultrasound scanners, underwater sonar, etc. Sampled image data typically represents a scene containing objects having edges or boundaries, such as blood vessels. Image data commonly comprises an array of sampled intensity values, known as pixels (in two dimensions) or voxels (in three dimensions).
It is known to filter two-dimensional and three-dimensional image data in order to remove unwanted features. Filtering can aid subsequent interpretation of the image data by a human or by a computer algorithm. Smoothing filters reduce noise or other fine-scale structures. This can be particularly beneficial in three-dimensional (3D) medical ultrasonography, where the presence of random and structured noise in the image data can significantly hinder the visual reconstruction of imaged structures during volume rendering operations. Traditionally, however, smoothing filters also blur edges or boundaries between objects or regions in the image data; this can impair subsequent processing and interpretation of the smoothed data.
Some edge-preserving, smoothing filtering techniques are already known.
"Flow-Based Image Abstraction" by H. Kang et al., IEEE Transactions on
Visualization and Computer Graphics, Vol. 15, No. 1 (January 2009), pp. 62-76, describes a method of smoothing regions within a photograph. It uses the image gradient to construct a feature-preserving vector field. This vector field is then iteratively smoothed such that salient edge directions are preserved while weak edges are redirected to follow the neighbouring dominant edges. A linear bilateral Gaussian smoothing operation can be applied to the photograph, along flow curves in the vector field. This operation is said to protect and clean up shape boundaries. Such an approach may be relatively complex and may not be straightforward to implement. It is also limited to use with two-dimensional images.
Different filters can perform better or worse than each other depending on the specific context in which they are applied and the particular outcome that is desired. It is therefore generally desirable to create new filters, as these may provide performance advantages over known filters in the appropriate contexts. The present invention seeks to provide a new filtering approach which is relatively straightforward to implement.
From a first aspect, the invention provides a method for filtering image data, the method comprising:
- for each of a first plurality of points within the image data, determining which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
- for each of a second plurality of points within the image data, (i) integrating the vector field to determine a streamline or streamline segment from the point, and (ii) determining a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
From a second aspect, the invention provides apparatus comprising processing means or logic configured:
- to receive image data;
- for each of a first plurality of points within the image data, to determine
which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
- for each of a second plurality of points within the image data, (i) to integrate the vector field to determine a streamline or streamline segment from the point, and (ii) to determine a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
From a third aspect, the invention provides software, or a signal or tangible medium bearing the same, comprising instructions which, when executed by processing means, cause the processing means:
- for each of a first plurality of points within image data, to determine which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
- for each of a second plurality of points within the image data, (i) to integrate the vector field to determine a streamline or streamline segment from the point, and (ii) to determine a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
Thus it will be seen by those skilled in the art that, in accordance with the invention, a filtering operation, such as a smoothing operation, follows paths of least variation within the data. Such paths are unlikely to cross boundaries between different regions within the image data, since such a crossing would typically be in a direction of relatively high variability. The filtering operation is therefore unlikely significantly to distort boundaries or edges within the image. Embodiments of the invention have been found to be particularly useful for filtering noise from certain images.
The operation of determining which of a plurality of directions is the direction along which a line of samples containing a point has the least variation can be implemented relatively straightforwardly. It can also be computed relatively efficiently, even in three dimensions. This is especially important in situations where near real-time filtering is desirable, e.g. when processing medical ultrasonography data.
The image data may have any number of dimensions, but will typically be two- or three-dimensional. In some preferred embodiments the image data is three-dimensional. The image data may comprise regularly or irregularly spaced samples; e.g. pixels or voxels. The samples may represent light intensity values, or tissue density (e.g. from ultrasound or x-ray signals), or any other appropriate measurand. The image data may be monochrome or may contain colour information.
The applicant has recognised that the invention can be particularly well suited to filtering 3D ultrasonography data, due to the efficiency of the approach. In one set of preferred embodiments, therefore, the image data comprises ultrasonography data; preferably three-dimensional ultrasonography data. The apparatus may comprise an ultrasound scanner. Embodiments of the method may comprise receiving or collecting ultrasonography image data.
This is not essential, however, and in other embodiments the image data may be magnetic resonance imaging (MRI) data, or a digital photograph, e.g. taken with an optical camera. The apparatus may form part of a digital camera, for instance. The filtered values may be used to generate a filtered image. Filtered image data may be displayed, e.g. on a display screen (potentially after further processing of the filtered data), or may be transmitted over a data connection, or may be processed further, or any combination of these. A volume rendering operation may be applied to the filtered data; e.g. to render a three-dimensional surface on a two-dimensional display screen.
The filtering may be applied to successive sets of sampled image data (which may be two- or three-dimensional) to generate filtered video. The filtering may happen in real-time or near real-time, where sufficient processing power is available.
The first plurality of points may comprise the positions of some or all the samples in the image data (e.g. with one point for each pixel or voxel), although this need not necessarily be the case. Some of the points may, for instance, be intermediate positions between pixels or voxels. The same holds for the second plurality of points. The second plurality of points is preferably the same as the first plurality of points, although this need not necessarily be so.
The plurality of directions may be a predetermined set of directions, e.g. defined relative to the image data. For instance, for a two-dimensional image, the directions might include some or all of: up, left, right, diagonally up-and-right, and diagonally up-and-left. For a three-dimensional image, the plurality of directions preferably do not all lie in the same plane. The plurality of directions may differ from one point to the next, but they are preferably the same plurality for all of the first plurality of points. This can simplify the implementation and ensure consistent performance across the whole image. (The determined direction of least variation will, however, typically not be the same between all the points.)
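By way of illustration only (the patent does not prescribe a particular set, and the helper name below is an assumption), one simple way to build such a predetermined direction set for a two-dimensional image is to take unit vectors at fixed angular increments over a half-turn; covering only a half-turn keeps one direction from each pair of opposite directions, as discussed in the next paragraph.

```python
import numpy as np

def candidate_directions_2d(step_deg: float = 45.0) -> np.ndarray:
    """Unit direction vectors at fixed angular increments over [0, 180) degrees.

    Covering only a half-turn keeps one direction from each opposite pair;
    with step_deg=45 this yields the two axis-aligned and the two diagonal
    directions from one half of the plane.
    """
    angles = np.deg2rad(np.arange(0.0, 180.0, step_deg))
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)  # one (d0, d1) row per direction
```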
The line of samples may extend from the point, but is preferably centred on the point; e.g. containing equally many samples in each direction from the point. In this case, the plurality of directions preferably includes only one direction from each pair of opposite directions; e.g. only positive y directions in an x-y or x-y-z Cartesian coordinate system, to avoid redundant calculation.
A line preferably passes precisely through its associated point, but it may instead be adjacent or near it (e.g. within one pixel's width or voxel's width of the point). The lines of samples in different directions from a point preferably all contain the same number of samples. The samples are preferably regularly-spaced along the line. The lines of samples are preferably all the same length across all of the first plurality of points. This length may correspond to a predetermined distance and/or to a predetermined number of samples; e.g. between two and forty samples, preferably between six and twenty samples; most preferably around ten or eleven samples. The number of samples to use may be received as an input parameter, e.g. from a user.
A line of samples may consist of values taken directly from the image data (e.g. successive pixel values along a line), but it may comprise at least some interpolated values, derived from the image data. For example, the samples along a horizontal line in a two-dimensional rectangular image could contain eleven consecutive pixel values, while the same length line segment on a diagonal at 30 degrees from horizontal would not typically contain as many pixel centres; in this latter case, a line of eleven samples may be determined by including some values which are interpolated from neighbouring pixel values. Such interpolation operations can be performed by graphics processing units (GPUs) in two and/or three dimensions.
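As an illustrative sketch of this sampling step (the helper name, and the use of SciPy's map_coordinates for the interpolation, are assumptions rather than taken from the patent), a centred line of 2n+1 samples at an arbitrary orientation could be gathered as follows:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def line_samples(image: np.ndarray, point, direction, n: int = 5,
                 step: float = 1.0) -> np.ndarray:
    """Return 2n+1 samples along a line centred on `point`.

    `image` is a 2D array indexed (row, col); `point` and `direction` are
    (row, col) pairs.  Sample positions falling between pixel centres are
    bilinearly interpolated (order=1), as discussed above.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    offsets = np.arange(-n, n + 1) * step                     # centred on the point
    coords = np.asarray(point, dtype=float)[:, None] + d[:, None] * offsets
    return map_coordinates(image, coords, order=1, mode='nearest')
```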
The measure of variation may take any suitable form. It might, for instance, be the difference between the smallest and the largest value from among the samples. Alternatively, it might be the median absolute deviation of the samples, or the average absolute deviation from the mean, mode or median. In preferred embodiments, however, it is the variance of the image samples (i.e. the mean of the squared differences between each sample value and the mean of the sample values from the line of samples), or a function thereof. Using the variance has been found to give good results. It is reasonably quick to compute.
A measure of variation is preferably calculated for each line of samples along each of the directions, and the smallest of these is determined (if two or more directions have the same value, an arbitrary selection might be made).
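Combining the two illustrative sketches above, the selection could look like the following (again an assumption, not the patent's exact implementation; ties are simply broken by taking the first minimum):

```python
def least_variation_direction(image, point, directions, n: int = 5):
    """Return the candidate direction whose centred line of samples has the
    lowest variance; if several directions tie, the first minimum is kept."""
    variances = [np.var(line_samples(image, point, d, n=n)) for d in directions]
    return directions[int(np.argmin(variances))]
```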
The vector field preferably consists of or comprises a direction vector for each of the first plurality of points; e.g. with one vector located at each point, in the determined direction of least variation. The vectors may all be assigned a common magnitude (e.g. unit length).
The vector field may be integrated in any appropriate manner to determine the streamlines or streamline segments. Some preferred embodiments employ the Runge-Kutta 4 integration scheme. The vector field is preferably integrated both forwards and backwards from each point in the second plurality of points; i.e.
integrating both the vector field and the inverted vector field.
In some embodiments, a streamline segment is calculated for a predetermined number of integration steps; e.g. between two and twenty steps. This may be done in both the forwards and backwards directions. In some preferred embodiments, around five integration steps are performed from each point, optionally both forwards and backwards (e.g. giving a combined streamline segment of around ten or eleven integration steps, centred on the point). The number of integration steps to use may be received as an input parameter. Integration of a streamline or streamline segment is preferably stopped on detection of a singularity or loop or the edge of the image data.
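A minimal sketch of such a streamline trace is shown below. It assumes a caller-supplied `field_at(pos)` function returning the (unit) least-variation direction vector at an arbitrary position, e.g. by nearest-neighbour lookup into the vector field; the stopping tests for singularities, loops and the data edge are omitted for brevity, and the names are illustrative.

```python
def rk4_step(field_at, pos, h: float = 1.0, sign: float = +1.0):
    """One fourth-order Runge-Kutta step through the vector field;
    sign=-1 steps through the inverted field for backward integration."""
    k1 = sign * field_at(pos)
    k2 = sign * field_at(pos + 0.5 * h * k1)
    k3 = sign * field_at(pos + 0.5 * h * k2)
    k4 = sign * field_at(pos + h * k3)
    return pos + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def streamline_segment(field_at, seed, steps: int = 5, h: float = 1.0,
                       sign: float = +1.0):
    """Trace `steps` integration steps from `seed`, returning the visited positions."""
    positions = []
    pos = np.asarray(seed, dtype=float)
    for _ in range(steps):
        pos = rk4_step(field_at, pos, h=h, sign=sign)
        positions.append(pos)
    return positions
```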
The filtering operation is preferably a smoothing operation. In a particularly preferred set of embodiments, the filtering operation comprises calculating an average of a set of sample values comprising sample values along the streamline or streamline segment. Preferably all the samples along forwards and backwards streamline segments (preferably of equal length) from a point are averaged. Any suitable average may be used. In preferred embodiments, the arithmetic mean is used. The average value may be the filtered value for the point, or the filtered value may be a function of the average value. Such a smoothing operation is
computationally efficient and has been found to provide good results. Other filtering operations are possible, however, such as bilateral smoothing along the streamline or streamline segment. Determining a filtered value for a point in the second plurality of points may additionally comprise one or more further filtering operations, not necessarily oriented along the streamline or streamline segment. It may comprise applying a filtering operation to the image data using a filtering kernel oriented at an angle to the streamline or streamline segment, such as perpendicular to, or in an orthogonal plane to, the streamline or streamline segment.
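Using the illustrative helpers sketched above, the smoothed value for a point could, for instance, be computed as the arithmetic mean of the image sampled at the seed and along equal-length forward and backward streamline segments; the helper names and details are assumptions, not the patent's exact implementation.

```python
def filtered_value(image, field_at, seed, m: int = 5) -> float:
    """Arithmetic mean of 2m+1 image samples taken along the streamline
    segment centred on `seed` (m steps forward, m steps backward)."""
    forward = streamline_segment(field_at, seed, steps=m, sign=+1.0)
    backward = streamline_segment(field_at, seed, steps=m, sign=-1.0)
    points = backward[::-1] + [np.asarray(seed, dtype=float)] + forward
    coords = np.stack(points, axis=1)                         # shape (ndim, 2m+1)
    samples = map_coordinates(image, coords, order=1, mode='nearest')
    return float(samples.mean())
```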
Processing means or logic may be configured to implement some or all steps of embodiments of the invention. This may take any suitable form. It may comprise one or more of: a central processing unit, a graphics processing unit, a
microcontroller, an ASIC and an FPGA. Processing may be carried out on a single device or may be shared across multiple devices in any appropriate way. For instance, one device may generate data representing a vector field and a different device (possibly remote from the first) may determine a streamline or streamline segment and/or apply a filtering operation. In some embodiments, sampled image data may be collected by a first device (e.g. an ultrasound scanner) and sent to a remote computer or server which applies the filtering operation.
Optional or preferred features of one aspect or embodiment described herein may, wherever appropriate, be applied to any other aspect or embodiment.
Certain preferred embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a figurative cross-sectional drawing of part of an ultrasonography scan with annotations to illustrate finding directions of lowest variance according to embodiments of the invention;
Figure 2 is a figurative cross-sectional drawing of part of an ultrasonography scan with annotations to illustrate determining streamline segments according to embodiments of the invention;
Figure 3 is a flow diagram showing steps carried out by an embodiment of the invention;
Figure 4 is a figurative diagram showing apparatus embodying the invention being used on a patient;
Figure 5 is an unfiltered output from a 3D cardiac ultrasound (rendered here in simplified black-and-white); and
Figure 6 is the output from the 3D cardiac ultrasound filtered using an embodiment of the invention (rendered here in simplified black-and-white).
A particular difficulty in ultrasound imaging is the presence of various kinds of noise that impede image interpretation. These kinds of noise can be categorised into two distinct categories: random and structured. Structured noise can be further categorised into subcategories such as acoustic scattering (speckle), shadowing and dropout. In ultrasound images, most of these noise types can be tackled by an examiner with substantial experience. Speckle noise can sometimes provide a useful motion cue in two-dimensional scanning and may be intentionally retained.
However, the presence of noise, including speckle noise, during 3D ultrasound visualization can be very problematic. In 3D renderings, random and structured noise impedes the visual reconstruction of imaged structures, by occluding or modifying structures. Noise can also have a harmful, dominating effect in the calculation of local illumination.
Embodiments of the present invention have therefore been found to be particularly useful in the context of 3D ultrasonography visualization, where the goal is to eliminate all kinds of noise and give prominence to the wanted signal, without modifying the signal to such an extent that it is no longer diagnostically relevant. Finding a clear separation between signal and noise in this context is not trivial and cannot be handled by common linear and non-linear filters.
Figures 1 and 2 illustrate two main stages carried out by certain embodiments of the invention. Figures 1 and 2 show a cross-section through a three-dimensional data set, obtained by ultrasound scanning. Visible is a blood vessel 1 which has an interior 2 (e.g. containing blood) bounded by a vessel wall 3. In a first stage, a tangent direction is determined for each voxel P in the image volume. The aim here is to select the direction that has the highest probability of being tangent to a fictive surface going through P. The outcome is a 3D vector field. In a second stage, a short streamline is constructed, seeded at each voxel P, by integrating the vector field produced by the first stage. The streamline is then used to define the shape of a filtering operator mask during a subsequent filtering operation.
Figure 1 illustrates how the vector field can be constructed, focussing on two exemplary voxels or points 4, 5. The approach evaluates variance in patterns in local neighbourhoods around the points 4, 5. By considering a neighbourhood around each point 4, 5, ultrasound-inherent speckle noise can be effectively dealt with. The approach is based on the insight that values along a line segment entirely inside one tissue material will typically have lower variance of intensities than a line segment which is crossing several materials.
To find the orientation of a line segment with lowest variance for each point 4, 5, line segments are notionally aligned so as to be centred on each point 4, 5, at each of a discrete set of directions (illustrated in Figure 1 by a set of line segments with arrow-heads). These directions are obtained by rotating an initial line segment in the XY and XZ planes around the point by an angle d. This assures a minimal angular sampling rate of d. Each line segment is defined by the position of the point 4, 5 and the direction vector x. Since vectors x and -x could effectively define the same line, in this embodiment x is consistently selected with a positive y-coordinate. The variance of a set of regularly-spaced samples for each of the line segments is then calculated. The spacing of the samples along the line may equal the voxel spacing. The sample set may include interpolated voxel values when a line segment is not parallel to the x-, y- or z-axis. The direction xmin which corresponds to the line segment with the lowest variance for each of the points 4, 5 is highlighted in Figure 1 by a line of circular blobs (figuratively representing the samples along those lines). The minimum-variance directions xmin are determined for every voxel in the data set and are copied to an output 3D vector field at the position of the associated voxel.
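One way such a direction set might be enumerated is sketched below. The text specifies only rotations in the XY and XZ planes by an angle d, so the details here (including how the positive-y convention is approximated for directions lying in the XZ plane, where y = 0) are assumptions for illustration.

```python
def candidate_directions_3d(d_deg: float = 30.0) -> np.ndarray:
    """Unit directions obtained by rotating a reference segment in the XY
    and XZ planes in increments of d over a half-turn, so that x and -x
    (which define the same line) are not both tested."""
    d = np.deg2rad(d_deg)
    dirs = [(np.cos(a), np.sin(a), 0.0) for a in np.arange(0.0, np.pi, d)]   # XY plane
    dirs += [(np.cos(a), 0.0, np.sin(a)) for a in np.arange(d, np.pi, d)]    # XZ plane
    return np.asarray(dirs)
```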
Formally, xmin can be defined as:
$$\mathbf{x}_{\min} \;\big|\; \forall \mathbf{x}:\ \operatorname*{Var}_{k \in -n..n}\!\big\{f(P + k\Delta\mathbf{x})\big\} \;\ge\; \operatorname*{Var}_{k \in -n..n}\!\big\{f(P + k\Delta\mathbf{x}_{\min})\big\}$$

where Var{.} is the variance of a set of values, f(P) is the voxel intensity at point P, Δ is a positive step size and n indicates how many samples are taken along the line segment in each of the positive and negative senses.
In one embodiment, n = 5 has been used and has been found to give good results.
Figure 2 illustrates how the vector field can be used to integrate a streamline, focussing on the exemplary voxel 4. Each vector represents the direction of the line segment with minimal variance. The streamline integration for a point or voxel is carried out in two steps: forward integration and backward integration. The forward integration constructs a part of the operator mask while integrating xmin from the underlying vector field. In Figure 2, this is illustrated by the two short arrows leading upwards and rightwards from voxel 4. Each arrow represents a successive integration step. The backward integration uses the inverted vector field (i.e. -xmin) to construct the second part of the operator mask. In Figure 2, this is represented by the two short arrows leading downwards and leftwards from voxel 4.
This is done for every voxel P in the image data in turn.
Both the backward- and the forward-integration parts are carried out using the Runge-Kutta 4 integration scheme. In this way, 2m+1 samples are obtained along a streamline centred on the voxel P, where m is the number of integration steps.
In one embodiment, a filtered value P' at each point P is then determined as the arithmetic mean of these samples:
$$P' = \frac{1}{2m+1}\sum_{i=-m}^{m} f(P_i)$$

where P_i is the i-th integration step of the streamline; i > 0 is the forward integration; i = 0 is the sample from the seed point; and i < 0 is the backward integration. In one embodiment, good results have been obtained using m = 5.
Of course, in other embodiments, different filters may be applied along the streamlines.
Figure 3 illustrates these stages in a flow diagram. Image data is received 6; a vector field is generated for each voxel 7; the streamlines are calculated for each voxel 8; filtering is applied to each voxel using a filter aligned along the streamlines 9; and the filtered image data is output 10.
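Tying the stages of Figure 3 together, a deliberately brute-force two-dimensional sketch built from the illustrative helpers above might look as follows. The function name, the nearest-neighbour lookup into the vector field, and the per-pixel loops are assumptions for readability; a practical implementation would vectorise these steps or run them on a GPU.

```python
def filter_image(image: np.ndarray, directions, n: int = 5, m: int = 5) -> np.ndarray:
    """Stage 1: build the per-pixel least-variance vector field.
    Stage 2: trace a streamline through the field from every pixel and
    replace the pixel by the mean of the image along that streamline."""
    rows, cols = image.shape
    field = np.zeros((rows, cols, 2))
    for r in range(rows):
        for c in range(cols):
            field[r, c] = least_variation_direction(image, (r, c), directions, n=n)

    def field_at(pos):
        # Nearest-neighbour lookup into the vector field, clamped to the image.
        r = int(round(min(max(pos[0], 0), rows - 1)))
        c = int(round(min(max(pos[1], 0), cols - 1)))
        return field[r, c]

    out = np.empty(image.shape, dtype=float)
    for r in range(rows):
        for c in range(cols):
            out[r, c] = filtered_value(image, field_at, (r, c), m=m)
    return out
```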
Figure 4 shows apparatus embodying the invention in use on a human patient 11. An ultrasound scanner handset 12 is directed towards the patient's heart. A processing unit 13 receives ultrasound reflection signals from the handset 12 for processing. In addition to the standard processing common to normal 3D ultrasonography, the processing unit 13 applies a filtering operation to the 3D image data as described herein. After the image data has been filtered, the processing unit 13 can use a volumetric rendering operation on the data to render a 3D view of the patient's heart on a display screen 14.
Figures 5 and 6 illustrate results obtained using an embodiment of the invention. Figure 5 shows a 3D ultrasound view of a human heart extracted from a cardiac cycle without any special filtering used in the processing of the image data. Figure 6 shows the corresponding view where filtering of the image data has been applied by a method embodying the invention. The amount of speckle and noise was significantly decreased; for instance resulting in smoother walls of the myocardium.

Claims

Claims
1. A method for filtering image data, the method comprising:
for each of a first plurality of points within the image data, determining which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
for each of a second plurality of points within the image data, (i) integrating the vector field to determine a streamline or streamline segment from the point, and (ii) determining a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
2. A method as claimed in claim 1, wherein the image data is three-dimensional image data.
3. A method as claimed in claim 1 or 2, wherein the image data comprises ultrasonography data.
4. A method as claimed in any preceding claim, further comprising applying a volume rendering operation to the filtered data.
5. A method as claimed in any preceding claim, wherein one or each of the first plurality of points and the second plurality of points comprises all the sample positions in the image data.
6. A method as claimed in any preceding claim, wherein the plurality of directions are the same for all of the first plurality of points.
7. A method as claimed in any preceding claim, wherein each line of samples is centred on its respective point.
8. A method as claimed in any preceding claim, wherein, for at least one of the points, all the lines of samples contain the same number of samples for all of the plurality of directions.
9. A method as claimed in any preceding claim, wherein the lines of samples all contain between two and forty samples, and preferably around eleven samples.
10. A method as claimed in any preceding claim, wherein the measure of variation is the variance of the image samples or a function thereof.
11. A method as claimed in any preceding claim, wherein the vector field is integrated using the Runge-Kutta 4 integration scheme.
12. A method as claimed in any preceding claim, wherein the vector field is integrated both forwards and backwards from each of the second plurality of points.
13. A method as claimed in any preceding claim, comprising integrating the vector field for between one and twenty steps away from each of the second plurality of points, and preferably around five steps.
14. A method as claimed in any preceding claim, wherein the filtering operation comprises calculating an average of a set of sample values comprising sample values along the streamline or streamline segment.
15. Apparatus comprising processing means or logic configured:
to receive image data;
for each of a first plurality of points within the image data, to determine which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
for each of a second plurality of points within the image data, (i) to integrate the vector field to determine a streamline or streamline segment from the point, and (ii) to determine a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
16. Apparatus as claimed in claim 15, wherein the image data is three-dimensional image data.
17. Apparatus as claimed in claim 15 or 16, wherein the apparatus further comprises an ultrasound scanner and the image data comprises ultrasonography data.
18. Apparatus as claimed in any of claims 15 to 17, wherein the processing means or logic is further configured to apply a volume rendering operation to the filtered data.
19. Apparatus as claimed in any of claims 15 to 18, further comprising a display and being configured to control the display in response to the filtered data.
20. Apparatus as claimed in any of claims 15 to 19, wherein one or each of the first plurality of points and the second plurality of points comprises all the sample positions in the image data.
21. Apparatus as claimed in any of claims 15 to 20, wherein the plurality of directions are the same for all of the first plurality of points.
22. Apparatus as claimed in any of claims 15 to 21, wherein each line of samples is centred on its respective point.
23. Apparatus as claimed in any of claims 15 to 22, wherein, for at least one of the points, all the lines of samples contain the same number of samples for all of the plurality of directions.
24. Apparatus as claimed in any of claims 15 to 23, wherein the lines of samples all contain between two and forty samples, and preferably around eleven samples.
25. Apparatus as claimed in any of claims 15 to 24, wherein the measure of variation is the variance of the image samples or a function thereof.
26. Apparatus as claimed in any of claims 15 to 25, wherein the processing means or logic is configured to integrate the vector field using the Runge-Kutta 4 integration scheme.
27. Apparatus as claimed in any of claims 15 to 26, wherein the processing means or logic is configured to integrate the vector field both forwards and backwards from each of the second plurality of points.
28. Apparatus as claimed in any of claims 15 to 27, wherein the processing means or logic is configured to integrate the vector field for between one and twenty steps away from each of the second plurality of points, and preferably around five steps.
29. Apparatus as claimed in any of claims 15 to 28, wherein the filtering operation comprises calculating an average of a set of sample values comprising sample values along the streamline or streamline segment.
30. Software comprising instructions which, when executed by processing means, cause the processing means:
for each of a first plurality of points within image data, to determine which of a plurality of directions is the direction along which a line of samples containing the point has the least variation according to a predetermined measure of variation, and thereby generating data representing a vector field; and
for each of a second plurality of points within the image data, (i) to integrate the vector field to determine a streamline or streamline segment from the point, and (ii) to determine a filtered value for the point by applying a filtering operation to the image data using a filtering kernel oriented along the streamline or streamline segment.
PCT/GB2013/052495 2012-09-26 2013-09-24 Image filtering WO2014049346A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GBGB1217225.0A GB201217225D0 (en) 2012-09-26 2012-09-26 Image filtering
GB1217225.0 2012-09-26

Publications (1)

Publication Number Publication Date
WO2014049346A1 true WO2014049346A1 (en) 2014-04-03

Family

ID=47190676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2013/052495 WO2014049346A1 (en) 2012-09-26 2013-09-24 Image filtering

Country Status (2)

Country Link
GB (1) GB201217225D0 (en)
WO (1) WO2014049346A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827391A (en) * 2019-11-12 2020-02-21 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6757442B1 (en) * 2000-11-22 2004-06-29 Ge Medical Systems Global Technology Company, Llc Image enhancement method with simultaneous noise reduction, non-uniformity equalization, and contrast enhancement

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6757442B1 (en) * 2000-11-22 2004-06-29 Ge Medical Systems Global Technology Company, Llc Image enhancement method with simultaneous noise reduction, non-uniformity equalization, and contrast enhancement

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Index of /vso001", 19 September 2012 (2012-09-19), XP055093260, Retrieved from the Internet <URL:http://folk.uib.no/vso001/> [retrieved on 20131213] *
ANONYMOUS: "Index of /vso001/dissertation", 19 September 2012 (2012-09-19), XP055093264, Retrieved from the Internet <URL:http://folk.uib.no/vso001/dissertation/> [retrieved on 20131213] *
DAVID TSCHUMPERLÉ: "Fast Anisotropic Smoothing of Multi-Valued Images using Curvature-Preserving PDE's", INTERNATIONAL JOURNAL OF COMPUTER VISION, KLUWER ACADEMIC PUBLISHERS, BO, vol. 68, no. 1, 1 March 2006 (2006-03-01), pages 65 - 82, XP019410111, ISSN: 1573-1405, DOI: 10.1007/S11263-006-5631-Z *
K-J JUNG ET AL: "Structural Tensor Imaging", PROCEEDINGS OF THE INTERNATIONAL SOCIETY FOR MAGNETIC RESONANCE IN MEDICINE, 18 May 2004 (2004-05-18), XP055093430 *
VERONIKA SOLTÉSZOVÁ: "Perception-Augmenting Illumination", 19 September 2012 (2012-09-19), pages 1 - 170, XP002717952, Retrieved from the Internet <URL:http://folk.uib.no/vso001/dissertation/diss-VS.pdf> [retrieved on 20131213] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827391A (en) * 2019-11-12 2020-02-21 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium

Also Published As

Publication number Publication date
GB201217225D0 (en) 2012-11-07

Similar Documents

Publication Publication Date Title
JP6273291B2 (en) Image processing apparatus and method
JP2004246625A (en) Image processing apparatus, image processing method and program
JP2006075602A (en) Visualization method of plaque deposition comprising three-dimensional image data set of blood vessel structure
TWI676024B (en) System and method for combining 3d images in color
JP2006271971A (en) Volumetric image enhancement system and method
AU2013273657A1 (en) Ophthalmic reference image selection
Bhateja et al. An improved medical image fusion approach using PCA and complex wavelets
JP2017536547A (en) Speckle reduction in optical coherence tomography images
JP2003334194A (en) Image processing equipment and ultrasonic diagnostic equipment
JP2015089516A (en) Method for processing image data representing three-dimensional volume
JP6639973B2 (en) Ultrasound diagnostic apparatus, medical image processing apparatus, and medical image processing program
CN108898567A (en) Image denoising method, apparatus and system
Auzinger et al. Vessel visualization using curved surface reformation
KR20140109801A (en) Method and apparatus for enhancing quality of 3D image
Yu et al. Performance evaluation of edge-directed interpolation methods for noise-free images
WO2014049346A1 (en) Image filtering
JP2009247490A (en) Image processor, image processing method and program
WO2021005098A1 (en) Signal processing apparatus and method using local length scales for deblurring
CN105957135B (en) The method and system of 3D rendering filtering and movie real-time rendering for being rendered based on volume
Stoppel et al. Visibility-driven depth determination of surface patches in direct volume rendering
Awan et al. Spatial and spatio-temporal feature extraction from 4D echocardiography images
US20170287206A1 (en) Method and apparatus for processing three-dimensional image data
JP6752059B2 (en) Image processing equipment and image processing methods, programs and storage media
EP2175417B1 (en) Method of filtering an image dataset
Nasim et al. Investigating 3D echocardiography image fusion for improving image quality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13766653

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13766653

Country of ref document: EP

Kind code of ref document: A1