CN101635046A - Image processing method and device based on compute unified device architecture (CUDA) technology - Google Patents

Image processing method and device based on compute unified device architecture (CUDA) technology Download PDF

Info

Publication number
CN101635046A
Authority
CN
China
Prior art keywords
gridding
window
coordinate
data
max
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910013236A
Other languages
Chinese (zh)
Other versions
CN101635046B (en)
Inventor
杨金柱
赵大哲
冯朝路
栗伟
王艳飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN2009100132368A priority Critical patent/CN101635046B/en
Publication of CN101635046A publication Critical patent/CN101635046A/en
Application granted granted Critical
Publication of CN101635046B publication Critical patent/CN101635046B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and device based on Compute Unified Device Architecture (CUDA) technology. The method comprises the following steps: obtaining basic data and determining the gridding result data scale according to the imaging resolution; obtaining the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale; obtaining the trajectory value of each element in turn according to its coordinates in the K-space convolution window, and computing, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point; if the Euclidean distance obtained is smaller than a first threshold, performing a convolution computation, in combination with the basic data, on the sampled data corresponding to the element value, to obtain the gridding calculation result data; and performing an inverse Fourier transform on the gridding calculation result data to obtain the image data. The invention resolves the write-write conflict on gridded data that arises when the convolution windows computed from the basic data are processed in parallel on CUDA.

Description

Image processing method and device based on Compute Unified Device Architecture (CUDA) technology
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and device based on Compute Unified Device Architecture (CUDA) technology.
Background art
Magnetic resonance imaging (MRI) is a biological spin imaging technique: in an externally applied magnetic field, nuclear spins excited by a radio-frequency pulse produce signals. MRI data acquisition detects these signals with a detector and feeds them into a computer; MRI reconstruction then post-processes the signals and converts them into an image displayed on screen.
MRI data acquisition is carried out in K space, the spatial frequency domain. To convert these K-space samples into a two- or three-dimensional image-space coordinate system such as a Cartesian coordinate system, an inverse Fourier transform must be applied to the data. However, the standard inverse Fourier transform algorithm requires data sampled uniformly at equal intervals in the coordinate system, so before applying the inverse Fourier transform the MRI reconstruction algorithm must transform the sampled data from K space into an equally spaced coordinate system, such as a Cartesian coordinate system, to achieve uniform sampling. This transformation is normally realized by a gridding algorithm.
The gridding algorithm uses data resampling to transform K-space data sampled at unequal intervals into a Cartesian coordinate system: it divides K space into an equally spaced rectilinear grid and assigns to the center of each grid cell a value equal to the "sum" of all data falling within that cell. Here the "sum" is not a simple addition of all data in the cell but a convolution interpolation operation.
The gridding algorithm is in fact a convolution interpolation: each collected sample is convolved with a spreading kernel, so that after convolution the energy of the sample is distributed onto the neighboring grid points, which is equivalent to resampling the value onto the grid points near that sample. The advantage of this convolution interpolation algorithm is that all sampled data are used in the interpolation. At present, the Kaiser-Bessel window function has become the convolution function generally adopted in MRI gridding.
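For reference, a commonly used form of the Kaiser-Bessel gridding kernel (this particular parameterization is an illustration and is not specified by the patent) is

$$C(k) = \frac{1}{W}\, I_0\!\left( \beta \sqrt{1 - \left( \frac{2k}{W} \right)^2} \right), \qquad |k| \le \frac{W}{2},$$

where $I_0$ is the zeroth-order modified Bessel function of the first kind, $W$ is the kernel width, and $\beta$ is a shape parameter chosen by the implementation.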
Compute Unified Device Architecture (CUDA) is a new-generation graphics processing unit (GPU) architecture. It can distribute and manage computation across many threads, fully exploit the advantages of the GPU, and conveniently provide hardware acceleration for large-scale parallel computation.
In the prior art, the gridding algorithm is formulated as:

$$M_{RS} = \left\{ \left[ M(u,v) \cdot \left( \frac{S(u,v)}{S(u,v) * C(u,v)} \right) \right] * C(u,v) \right\} \cdot R(u,v) \qquad (2\text{-}1)$$

In this formula, M(u,v) is the magnetic resonance sample data, M_RS is the data after resampling, S(u,v) is the unequal-interval sampling function, R(u,v) is the equally spaced rectilinear grid sampling function, C(u,v) is the convolution function (a Kaiser-Bessel function), · denotes matrix multiplication, and * denotes convolution. The factor S(u,v)/(S(u,v)*C(u,v)) is in fact the non-uniform sampling density compensation function. The unequal-interval sampling function S(u,v) is a complex number whose real part is denoted x and whose imaginary part is denoted y; the computation formulas are given in (2-2)-(2-3):
$$S(u,v).x = -\frac{corX}{2} + \frac{corX}{SampSize} \cdot u \qquad (2\text{-}2)$$
$$S(u,v).y = -\frac{corY}{2} + \frac{corY}{ScanMatrix} \cdot \left( \frac{ScanMatrix - ETL}{2} + v \right) \qquad (2\text{-}3)$$

where corX and corY denote the normalized lengths along the Cartesian X and Y axes respectively, SampSize denotes the number of sampling points, ScanMatrix denotes the sampling scan matrix size, ETL denotes the echo train length, 0 ≤ u < SampSize, and 0 ≤ v < ETL.
The K-space convolution window size [kwidth.x, kwidth.y] is computed as shown in (2-4)-(2-5):

kwidth.x = kmax.x * convwidth.x / gridsize.x    (2-4)
kwidth.y = kmax.y * convwidth.y / gridsize.y    (2-5)

Here, (gridsize.x, gridsize.y) denotes the gridding result data scale, i.e. the size of the gridding result data matrix; (convwidth.x, convwidth.y) denotes the image-space convolution window size; and (kmax.x, kmax.y) denotes the normalized K-space size.
When the K-space convolution window size is [kwidth.x, kwidth.y], the gridding result matrix convolution window coordinate range [(ix_min, iy_min), (ix_max, iy_max)] corresponding to sample trajectory point S(u,v) is computed as shown in (2-6)-(2-9):

$$ix_{min} = (\mathrm{int})\left( (S(u,v).x - kwidth.x) \cdot \frac{gridsize.x}{kmax.x} + gridcenter.x \right) \qquad (2\text{-}6)$$
if (ix_min < 0) ix_min = 0;
$$ix_{max} = (\mathrm{int})\left( (S(u,v).x + kwidth.x) \cdot \frac{gridsize.x}{kmax.x} + gridcenter.x \right) + 1 \qquad (2\text{-}7)$$
if (ix_max ≥ gridsize.x) ix_max = gridsize.x - 1;
$$iy_{min} = (\mathrm{int})\left( (S(u,v).y - kwidth.y) \cdot \frac{gridsize.y}{kmax.y} + gridcenter.y \right) \qquad (2\text{-}8)$$
if (iy_min < 0) iy_min = 0;
$$iy_{max} = (\mathrm{int})\left( (S(u,v).y + kwidth.y) \cdot \frac{gridsize.y}{kmax.y} + gridcenter.y \right) + 1 \qquad (2\text{-}9)$$
if (iy_max ≥ gridsize.y) iy_max = gridsize.y - 1;

In the above formulas, min and max denote the functions taking the minimum and maximum values, (gridcenter.x, gridcenter.y) denotes the gridding result center position, (convwidth.x, convwidth.y) denotes the image-space convolution window size, and (kwidth.x, kwidth.y) denotes the K-space convolution window size; the remaining variables are as defined above and are not explained again.
The K-space Euclidean distance between a point in the gridding result matrix convolution coordinate window [(ix_min, iy_min), (ix_max, iy_max)] and the window center point is computed as shown in (2-10)-(2-12):

$$dkx = kmax.x \cdot \frac{grid(p,q).x - gridcenter.x}{gridsize.x} - S(u,v).x \qquad (2\text{-}10)$$
$$dky = kmax.y \cdot \frac{grid(p,q).y - gridcenter.y}{gridsize.y} - S(u,v).y \qquad (2\text{-}11)$$
$$dk = \sqrt{dkx^2 + dky^2} \qquad (2\text{-}12)$$

where (p, q) ∈ [(ix_min, iy_min), (ix_max, iy_max)], grid(p,q).x and grid(p,q).y denote the gridded data at (p,q), and dk denotes the K-space distance of the window element from the convolution window center point. For points of the gridding convolution window satisfying $dk < \sqrt{(kwidth.x)^2 + (kwidth.y)^2}$, the sampled data point corresponding to sample trajectory S(u,v) is included in the convolution operation.
Let dataw denote the number of sample trajectory columns, datah the number of sample trajectory rows, and grid(p,q) the gridding result at point (p,q). The main process of the gridding algorithm is:
(1) Preprocessing: generate the sample trajectory S(u,v) and sample the data M(u,v);
(2) for i = 0 → datah, incrementing by 1 each time;
(3) for j = 0 → dataw, incrementing by 1 each time;
(4) using formulas (2-6) to (2-9), transform the K-space convolution window [kwidth.x, kwidth.y] of sample trajectory point (i, j) into the corresponding gridding matrix coordinate window [(ix_min, iy_min), (ix_max, iy_max)] in image space;
(5) for p = iy_min → iy_max;
(6) for q = ix_min → ix_max;
(7) using formulas (2-10) to (2-12), compute dk; if $dk \ge \sqrt{(kwidth.x)^2 + (kwidth.y)^2}$, go to (5);
(8) using formula (2-1), perform the convolution operation on the result data grid(p,q), obtaining the gridding result matrix.
Fig. 1 is a schematic diagram of the convolution window calculation principle of the prior-art gridding algorithm. As can be seen from Fig. 1, convolution windows are computed per trajectory: convolution window I is computed from trajectory 1, and convolution window II from trajectory 2. CUDA can distribute and manage tasks over many threads; when the prior-art gridding algorithm is implemented with CUDA, i.e. processed in parallel, the overlapping region of convolution window I and convolution window II is written simultaneously by the two computations (the computation for trajectory 1 and the computation for trajectory 2). In other words, in step (4) of the gridding main process the convolution windows overlap, and the gridding algorithm therefore suffers from a write-write conflict.
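To make the conflict concrete, the following is a minimal CUDA sketch, not taken from the patent, of a trajectory-parallel gridding kernel in the prior-art style; all names are hypothetical and the Kaiser-Bessel weight is replaced by a constant for brevity. Each thread handles one trajectory point and accumulates into every grid cell of its convolution window, so two threads whose windows overlap perform unsynchronized read-modify-writes on the same grid elements — exactly the write-write conflict described above.

```cuda
// Hypothetical sketch of the PRIOR-ART task distribution: one thread per
// sample trajectory point (i, j). Not the patent's method.
__global__ void grid_by_trajectory(const float2* sample, const float2* traj,
                                   float2* grid,
                                   int dataw, int datah,
                                   int gridsize_x, int gridsize_y,
                                   float gridcenter_x, float gridcenter_y,
                                   float kmax_x, float kmax_y,
                                   float kwidth_x, float kwidth_y)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // trajectory column
    int i = blockIdx.y * blockDim.y + threadIdx.y;   // trajectory row
    if (i >= datah || j >= dataw) return;

    float2 s = traj[i * dataw + j];                  // S(u,v)
    float2 m = sample[i * dataw + j];                // M(u,v)

    // Formulas (2-6)-(2-9): K-space window -> gridding matrix coordinate window
    int ix_min = max(0, (int)((s.x - kwidth_x) * gridsize_x / kmax_x + gridcenter_x));
    int ix_max = min(gridsize_x - 1,
                     (int)((s.x + kwidth_x) * gridsize_x / kmax_x + gridcenter_x) + 1);
    int iy_min = max(0, (int)((s.y - kwidth_y) * gridsize_y / kmax_y + gridcenter_y));
    int iy_max = min(gridsize_y - 1,
                     (int)((s.y + kwidth_y) * gridsize_y / kmax_y + gridcenter_y) + 1);

    for (int p = iy_min; p <= iy_max; ++p) {
        for (int q = ix_min; q <= ix_max; ++q) {
            float w = 1.0f;                          // Kaiser-Bessel weight omitted
            int idx = p * gridsize_x + q;
            // Write-write conflict: threads with overlapping windows update
            // grid[idx] concurrently with no synchronization.
            grid[idx].x += w * m.x;
            grid[idx].y += w * m.y;
        }
    }
}
```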
That is to say, because CUDA processes the gridding algorithm in parallel, steps (2) and (3) above, which iterate over the base data, are executed by multiple threads concurrently. As a result, for the same set of input data and the same algorithm, each run produces a different result (caused by the overlap of window I and window II), and the differences show no regularity. For example, Table 1 gives the execution environment of the CUDA-based gridding algorithm, and Table 2 gives representative data showing the write-write conflict of the gridding algorithm under the environment of Table 1: the first column, "position information", is the coordinate of the gridding point; the second column is the CPU gridding result; the third and fourth columns are results of the CUDA-based gridding algorithm. As can be seen from the circled data in the table, the results of the CUDA-based gridding algorithm are inconsistent from run to run, without any rule.
Table 1 — execution environment of the CUDA-based gridding algorithm (provided as an image in the original publication).
Table 2 — representative data showing the write-write conflict under the environment of Table 1 (provided as an image in the original publication).
It can be seen that, for an identical execution environment and identical input parameters, the execution result of the CUDA-based gridding algorithm is unstable: each run differs somewhat, and the differences follow no rule. Consequently the image obtained by applying the inverse Fourier transform to this result also differs from run to run, without any rule, and any subsequent processing that depends on this image is therefore inaccurate.
Summary of the invention
The embodiments of the invention provide an image processing method and device based on Compute Unified Device Architecture technology, to avoid the write-write conflict present in parallel processing, so that the execution result of the CUDA-based gridding algorithm is stable and a fully consistent image is finally obtained on every run.
The application provides an image processing method based on Compute Unified Device Architecture technology, comprising:
obtaining basic data, and determining the gridding result data scale according to the imaging resolution;
obtaining the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale;
obtaining the trajectory value of each element in turn according to its coordinates in the K-space convolution window, and computing, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point;
if the Euclidean distance obtained is smaller than a first threshold, performing a convolution computation, in combination with the basic data, on the sampled data corresponding to the element value, to obtain the gridding calculation result data; and
performing an inverse Fourier transform on the gridding calculation result data to obtain image data.
The basic data comprises: sampled data, density compensation data, sample trajectory data, and convolution kernel data.
The step of obtaining the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale comprises: creating an execution kernel function according to the gridding result data scale, and performing the following steps with the execution kernel function:
obtaining the midpoint coordinates of the gridding result data according to the gridding result data scale;
computing the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
computing the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values; and
determining the coordinates of all elements in the window from the coordinates of the four vertices.
The step of computing the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates comprises:

$$x_{min} = \frac{grid.x - gridcenter.x - convwidth.x}{gridsize.x} \cdot kmax.x$$
$$x_{max} = \frac{grid.x - gridcenter.x + convwidth.x}{gridsize.x} \cdot kmax.x$$
$$y_{min} = \frac{grid.y - gridcenter.y - convwidth.y}{gridsize.y} \cdot kmax.y$$
$$y_{max} = \frac{grid.y - gridcenter.y + convwidth.y}{gridsize.y} \cdot kmax.y$$

where grid.x denotes the X coordinate of the current gridding point; grid.y denotes the Y coordinate of the current gridding point; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; gridcenter.x denotes the X coordinate of the gridding result center point; gridcenter.y denotes the Y coordinate of the gridding result center point; convwidth.x denotes the image-space convolution window size in the X direction; convwidth.y denotes the image-space convolution window size in the Y direction; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction.
The coordinates of the four vertices in the window are computed from the K-space convolution window vertex trajectory values as:

$$u_{min} = \left( x_{min} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX}$$
$$u_{max} = \left( x_{max} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX}$$
$$v_{min} = \left( y_{min} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2}$$
$$v_{max} = \left( y_{max} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2}$$

where corX and corY denote the normalized lengths along the Cartesian X and Y axes respectively, SampSize denotes the number of sampling points, ScanMatrix denotes the sampling scan matrix size, ETL denotes the echo train length, 0 ≤ u < SampSize, and 0 ≤ v < ETL.
The step of computing, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point comprises:
i) computing the Euclidean distance dk between the trajectory value of one element and the trajectory value of the K-space center point, specifically:

$$dkx = kmax.x \cdot \frac{grid.x - gridcenter.x}{gridsize.x} - S(p,q).x$$
$$dky = kmax.y \cdot \frac{grid.y - gridcenter.y}{gridsize.y} - S(p,q).y$$
$$dk = \sqrt{dkx^2 + dky^2}$$

where (p, q) ∈ [(x_min, y_min), (x_max, y_max)], S(p,q) denotes the value of the sampling function S(u,v) at (p,q), and dk denotes the K-space Euclidean distance of the window element from the convolution window center point; grid.x denotes the X coordinate of the current gridding point; grid.y denotes the Y coordinate of the current gridding point; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; gridcenter.x denotes the X coordinate of the gridding result center point; gridcenter.y denotes the Y coordinate of the gridding result center point; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction;
ii) repeating step i) to compute, in turn, the Euclidean distance dk between the trajectory value of each element and the trajectory value of the K-space center point.
The first threshold d is:

$$d = \sqrt{\left( \frac{convwidth.x}{gridsize.x} \cdot kmax.x \right)^2 + \left( \frac{convwidth.y}{gridsize.y} \cdot kmax.y \right)^2}$$

where convwidth.x denotes the image-space convolution window size in the X direction; convwidth.y denotes the image-space convolution window size in the Y direction; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction.
The application also provides an image processing device based on Compute Unified Device Architecture technology, comprising:
an acquisition module, configured to obtain basic data;
a data scale determination module, configured to determine the gridding result data scale according to the imaging resolution;
a coordinate calculation module, configured to obtain the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale;
a trajectory calculation module, configured to obtain, in turn, the trajectory value of each element according to its coordinates in the K-space convolution window;
a Euclidean distance calculation module, configured to compute, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point;
a gridding calculation module, configured to perform, when the Euclidean distance obtained is smaller than the first threshold, a convolution computation in combination with the basic data on the sampled data corresponding to the element value, obtaining the gridding calculation result data; and
an image data calculation module, configured to perform an inverse Fourier transform on the gridding calculation result data to obtain image data.
The basic data comprises: sampled data, density compensation data, sample trajectory data, and convolution kernel data.
The coordinate calculation module comprises:
a center point coordinate calculation module, configured to obtain the midpoint coordinates of the gridding result data according to the gridding result data scale;
a vertex trajectory calculation module, configured to compute the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
a vertex coordinate calculation module, configured to compute the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values; and
a window element coordinate calculation module, configured to determine the coordinates of all elements in the window from the coordinates of the four vertices.
The embodiments of the invention well solve the write-write conflict caused by parallel processing of the CUDA-based gridding algorithm. The CUDA result and the CPU result differ only in the second to sixth decimal places; when the input conditions are identical, the execution result is also unique, so that the execution result of the CUDA-based gridding algorithm is stable and, given identical inputs, a fully consistent image is finally obtained.
Description of drawings
To explain the embodiments of the invention or the prior-art technical solutions more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the convolution window calculation principle of the prior-art gridding algorithm;
Fig. 2 is a schematic diagram of the convolution window calculation principle of the gridding algorithm according to an embodiment of the invention;
Fig. 3 is a flowchart of the image processing method based on CUDA technology according to an embodiment of the invention;
Fig. 4 is a schematic structural diagram of the image processing device based on CUDA technology according to an embodiment of the invention;
Fig. 5 shows CPU gridding imaging results;
Fig. 6 shows CUDA-based gridding imaging results obtained with an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
The embodiments of the invention focus on solving the write-write conflict of the CUDA-based gridding algorithm. The main idea is a task distribution strategy that takes the gridding result as its starting point: the algorithm computes a convolution window per gridding result point, and the value of that gridding point is computed by convolving the sample trajectories S(u,v) falling within the window with the magnetic resonance sample data M(u,v). Because the position of a gridding point is unique, its corresponding convolution window is also unique, so no write-write conflict exists in the parallel processing; the execution result of the CUDA-based gridding algorithm is therefore stable, and a fully consistent final image is obtained on every run.
Fig. 2 is a schematic diagram of the convolution window calculation principle of the gridding algorithm according to an embodiment of the invention. In this embodiment, the starting point is the gridding result points: for example, the sample trajectories S(u,v) falling inside window I are convolved with the magnetic resonance sample data M(u,v) to compute that gridding value, yielding result data 1, and the sample trajectories S(u,v) falling inside window II are convolved with M(u,v) to compute that gridding value, yielding result data 2. In other words, once the input data are determined the output data are determined, and the write-write conflict is thereby avoided. That is, because the position of each gridding point is unique, even if its corresponding convolution windows overlap, the gridded data points have no write-write conflict during parallel processing.
Referring to Fig. 3, a flowchart of the image processing method based on CUDA technology according to an embodiment of the invention, the method specifically comprises:
Step 301: obtain basic data, and determine the gridding result data scale according to the imaging resolution.
Specifically, the basic data comprises all data required as algorithm input: the sampled data M(u,v), the density compensation data S(u,v)/(S(u,v)*C(u,v)), the sample trajectory data S(u,v), and the convolution kernel data C(u,v). These basic data need to be loaded into GPU video memory.
The gridding result data scale is determined from the loaded basic data and the imaging result resolution [imagesize.x, imagesize.y]. In this embodiment, the gridding result data scale is equal in size to the image resolution; it can be expressed as [gridsize.x, gridsize.y]. Put simply, the gridding result data scale is just the size of the gridding result matrix. For example, if the imaging resolution is 512*512, the gridding result data scale is also 512*512.
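As an illustration only (all names are hypothetical and not taken from the patent), loading the basic data into GPU video memory and sizing the gridding result to the imaging resolution might look like the following host-side sketch, assuming the host arrays h_sample and h_traj have already been prepared:

```cuda
// Hypothetical host-side setup: copy basic data to the GPU and allocate the
// gridding result at the imaging resolution (gridsize == imagesize).
size_t nSamples = (size_t)dataw * datah;
size_t nGrid    = (size_t)imagesize_x * imagesize_y;

float2 *d_sample, *d_traj, *d_grid;
float  *d_density, *d_kernel;
cudaMalloc(&d_sample,  nSamples * sizeof(float2));        // M(u,v)
cudaMalloc(&d_traj,    nSamples * sizeof(float2));        // S(u,v)
cudaMalloc(&d_density, nSamples * sizeof(float));         // density compensation
cudaMalloc(&d_kernel,  kernelTableSize * sizeof(float));  // C(u,v) lookup table
cudaMalloc(&d_grid,    nGrid * sizeof(float2));           // gridding result
cudaMemcpy(d_sample, h_sample, nSamples * sizeof(float2), cudaMemcpyHostToDevice);
cudaMemcpy(d_traj,   h_traj,   nSamples * sizeof(float2), cudaMemcpyHostToDevice);
cudaMemset(d_grid, 0, nGrid * sizeof(float2));            // start from an empty grid
```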
Step 302: obtain the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale.
Specifically: create an execution kernel function according to the gridding result data scale, and perform the following steps with the execution kernel function:
i) obtain the midpoint coordinates of the gridding result data according to the gridding result data scale;
ii) compute the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
iii) compute the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values;
iv) determine the coordinates of all elements in the window from the coordinates of the four vertices.
The concrete steps of creating the execution kernel function comprise: calling the CUDA API to obtain GPU hardware parameter information, and selecting a thread creation strategy according to this hardware parameter information, thereby creating the execution kernel function. The process of creating an execution kernel function is itself prior art and is not described in detail here.
After the execution kernel function is created, each gridded data point corresponds to one thread. The gridding result data scale is partitioned into blocks of blocksize.x * blocksize.y, where [blocksize.x, blocksize.y] denotes the thread scale within a block, producing a total of (gridsize.x / blocksize.x) * (gridsize.y / blocksize.y) thread blocks, with one thread per gridded data point (i.e. per point of the grid). The number of threads in a block should not exceed the maximum number of threads a GPU multiprocessor (SM) can execute simultaneously, but the number of thread blocks may exceed the number of streaming processors (SP). A minimal launch-configuration sketch is given below.
The specific implementation of step ii), computing the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates, is:

$$x_{min} = \frac{grid.x - gridcenter.x - convwidth.x}{gridsize.x} \cdot kmax.x \qquad (2\text{-}13)$$
$$x_{max} = \frac{grid.x - gridcenter.x + convwidth.x}{gridsize.x} \cdot kmax.x \qquad (2\text{-}14)$$
$$y_{min} = \frac{grid.y - gridcenter.y - convwidth.y}{gridsize.y} \cdot kmax.y \qquad (2\text{-}15)$$
$$y_{max} = \frac{grid.y - gridcenter.y + convwidth.y}{gridsize.y} \cdot kmax.y \qquad (2\text{-}16)$$
The specific implementation of step iii), computing the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values, is:

$$u_{min} = \left( x_{min} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX} \qquad (2\text{-}17)$$
$$u_{max} = \left( x_{max} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX} \qquad (2\text{-}18)$$
$$v_{min} = \left( y_{min} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2} \qquad (2\text{-}19)$$
$$v_{max} = \left( y_{max} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2} \qquad (2\text{-}20)$$

where corX and corY denote the normalized lengths along the Cartesian X and Y axes respectively, SampSize denotes the number of sampling points, ScanMatrix denotes the sampling scan matrix size, ETL denotes the echo train length, 0 ≤ u < SampSize, 0 ≤ v < ETL, and undeclared variables are as defined above.
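A minimal device-side sketch of formulas (2-13) to (2-20), using the variable names from the text (this is an illustration, not the patent's code):

```cuda
// Hypothetical device helper: from the current gridding point, compute the
// K-space window vertex trajectory values (2-13)-(2-16) and map them to
// sample trajectory coordinates (2-17)-(2-20).
__device__ void window_for_grid_point(float grid_x, float grid_y,
                                      float gridcenter_x, float gridcenter_y,
                                      float gridsize_x, float gridsize_y,
                                      float convwidth_x, float convwidth_y,
                                      float kmax_x, float kmax_y,
                                      float corX, float corY,
                                      float SampSize, float ScanMatrix, float ETL,
                                      float* u_min, float* u_max,
                                      float* v_min, float* v_max)
{
    // (2-13)-(2-16): window vertex trajectory values in K space
    float x_min = (grid_x - gridcenter_x - convwidth_x) / gridsize_x * kmax_x;
    float x_max = (grid_x - gridcenter_x + convwidth_x) / gridsize_x * kmax_x;
    float y_min = (grid_y - gridcenter_y - convwidth_y) / gridsize_y * kmax_y;
    float y_max = (grid_y - gridcenter_y + convwidth_y) / gridsize_y * kmax_y;

    // (2-17)-(2-20): vertex coordinates in the sample trajectory index space
    *u_min = (x_min + corX * 0.5f) * SampSize / corX;
    *u_max = (x_max + corX * 0.5f) * SampSize / corX;
    *v_min = (y_min + corY * 0.5f) * ScanMatrix / corY - (ScanMatrix - ETL) * 0.5f;
    *v_max = (y_max + corY * 0.5f) * ScanMatrix / corY - (ScanMatrix - ETL) * 0.5f;
}
```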
Step 303: obtain the trajectory value of each element in turn according to its coordinates in the K-space convolution window, and compute, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point.
The Euclidean distance dk between the trajectory value of one element and the trajectory value of the K-space center point is computed as:

$$dkx = kmax.x \cdot \frac{grid.x - gridcenter.x}{gridsize.x} - S(p,q).x \qquad (2\text{-}21)$$
$$dky = kmax.y \cdot \frac{grid.y - gridcenter.y}{gridsize.y} - S(p,q).y \qquad (2\text{-}22)$$
$$dk = \sqrt{dkx^2 + dky^2} \qquad (2\text{-}23)$$

where (p, q) ∈ [(x_min, y_min), (x_max, y_max)], S(p,q) denotes the value of the sampling function S(u,v) at (p,q), dk denotes the K-space Euclidean distance between the window element and the convolution window center point, and undeclared variables are as defined above.
Repeating the above steps, the Euclidean distance dk between the trajectory value of each element and the trajectory value of the K-space center point is computed in turn.
Step 304: if the Euclidean distance obtained is smaller than the first threshold, perform a convolution computation, in combination with the basic data, on the sampled data corresponding to the element value, obtaining the gridding calculation result data.
The first threshold d is computed as:

$$d = \sqrt{\left( \frac{convwidth.x}{gridsize.x} \cdot kmax.x \right)^2 + \left( \frac{convwidth.y}{gridsize.y} \cdot kmax.y \right)^2} \qquad (2\text{-}24)$$

In combination with the aforementioned basic data, the sample trajectories within the coordinate range [(u_min, v_min), (u_max, v_max)] that satisfy dk < d are used in the convolution operation, obtaining the gridding calculation result data, i.e. the value of the gridding data point at (grid.x, grid.y). A kernel sketch combining steps 302 to 304 is given below.
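Putting steps 302 to 304 together, the following is a minimal grid-point-driven kernel sketch (hypothetical names; for brevity it scans all trajectory points rather than only the window [(u_min, v_min), (u_max, v_max)], and the Kaiser-Bessel weight is replaced by a simple placeholder). Each thread writes only its own output element, which is why the write-write conflict cannot occur.

```cuda
// Hypothetical grid-point-driven kernel: one thread per gridding result point.
// Each thread accumulates into exactly one output element, so no two threads
// ever write the same address.
__global__ void grid_by_point(const float2* sample, const float2* traj,
                              float2* grid_out,
                              int dataw, int datah,
                              int gridsize_x, int gridsize_y,
                              float gridcenter_x, float gridcenter_y,
                              float kmax_x, float kmax_y,
                              float convwidth_x, float convwidth_y)
{
    int gx = blockIdx.x * blockDim.x + threadIdx.x;      // grid.x
    int gy = blockIdx.y * blockDim.y + threadIdx.y;      // grid.y
    if (gx >= gridsize_x || gy >= gridsize_y) return;

    // First threshold d, formula (2-24)
    float dx = convwidth_x / gridsize_x * kmax_x;
    float dy = convwidth_y / gridsize_y * kmax_y;
    float d  = sqrtf(dx * dx + dy * dy);

    float2 acc = make_float2(0.0f, 0.0f);
    for (int p = 0; p < datah; ++p) {
        for (int q = 0; q < dataw; ++q) {
            float2 s = traj[p * dataw + q];              // S(p,q)
            // (2-21)-(2-23): distance of the element trajectory from the window center
            float dkx = kmax_x * (gx - gridcenter_x) / gridsize_x - s.x;
            float dky = kmax_y * (gy - gridcenter_y) / gridsize_y - s.y;
            float dk  = sqrtf(dkx * dkx + dky * dky);
            if (dk < d) {
                float w = 1.0f - dk / d;                 // placeholder weight, NOT Kaiser-Bessel
                acc.x += w * sample[p * dataw + q].x;
                acc.y += w * sample[p * dataw + q].y;
            }
        }
    }
    grid_out[gy * gridsize_x + gx] = acc;                // single owner: no write-write conflict
}
```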
Step 305: perform an inverse Fourier transform on the gridding result data to obtain image data. At this point, the CUDA-based magnetic resonance image has been obtained.
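Step 305 can be carried out on the GPU with the cuFFT library; a minimal sketch follows (the in-place, complex-to-complex plan shown here is an assumption, not taken from the patent):

```cuda
#include <cufft.h>

// Hypothetical inverse 2-D FFT of the gridding result, in place, complex-to-complex.
cufftHandle plan;
cufftPlan2d(&plan, gridsize_y, gridsize_x, CUFFT_C2C);   // gridsize_y rows, gridsize_x columns
cufftExecC2C(plan, (cufftComplex*)d_grid, (cufftComplex*)d_grid, CUFFT_INVERSE);
cufftDestroy(plan);
// cuFFT leaves the inverse transform unnormalized; scale by 1/(gridsize_x*gridsize_y) if needed.
```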
The image processing method based on CUDA technology provided by the embodiment of the invention well solves the write-write conflict of the CUDA-based gridding algorithm. The CUDA result and the CPU result differ only in the second to sixth decimal places; when the input conditions are identical, the execution result is also unique, ensuring that, given identical inputs, the finally obtained images are fully consistent on every run.
An embodiment of the invention also provides an image processing device based on Compute Unified Device Architecture technology; referring to Fig. 4, it specifically comprises:
an acquisition module 401, configured to obtain basic data;
a data scale determination module 402, configured to determine the gridding result data scale according to the imaging resolution;
a coordinate calculation module 403, configured to obtain the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale;
a trajectory calculation module 404, configured to obtain, in turn, the trajectory value of each element according to its coordinates in the K-space convolution window;
a Euclidean distance calculation module 405, configured to compute, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point;
a gridding calculation module 406, configured to perform, when the Euclidean distance obtained is smaller than the first threshold, a convolution computation in combination with the basic data on the sampled data corresponding to the element value, obtaining the gridding calculation result data; and
an image data calculation module 407, configured to perform an inverse Fourier transform on the gridding calculation result data to obtain image data.
The basic data comprises: sampled data, density compensation data, sample trajectory data, and convolution kernel data.
The coordinate calculation module 403 may specifically comprise:
a center point coordinate calculation module, configured to obtain the midpoint coordinates of the gridding result data according to the gridding result data scale;
a vertex trajectory calculation module, configured to compute the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
a vertex coordinate calculation module, configured to compute the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values; and
a window element coordinate calculation module, configured to determine the coordinates of all elements in the window from the coordinates of the four vertices.
The image processing device based on CUDA technology provided by the embodiment of the invention well solves the write-write conflict of the CUDA-based gridding algorithm. The CUDA result and the CPU result differ only in the second to sixth decimal places; when the input conditions are identical, the execution result is also unique, ensuring that, given identical inputs, the finally obtained images are fully consistent on every run.
Under the execution environment described in Table 1, applying the method provided by the embodiment of the invention yields the CUDA-based reverse gridding data shown in Table 3:
Table 3 — CUDA-based reverse gridding data under the environment of Table 1 (provided as an image in the original publication).
Under the execution environment shown in Table 1, the images obtained with the CPU and with the method described in the embodiment of the invention are shown in Fig. 5 and Fig. 6 respectively. Fig. 5 shows the CPU gridding imaging results; Figs. 5(a)-5(c) use, in order, moving head data, static head data, and static water phantom data. Fig. 6 shows the CUDA-based reverse gridding imaging results of the embodiment of the invention; Figs. 6(a)-6(c) use, in order, moving head data, static head data, and static water phantom data. From the results of these three groups of images it can be seen that the CUDA reverse gridding reconstruction results and the CPU gridding reconstruction results are fully consistent.
It should be noted that this document involves many formulas; for brevity, the variables involved in each formula are not explained every time, and reference can be made to the explanations of the preceding formulas.
It should be noted that, since the device embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant parts, reference can be made to the description of the method embodiment.
It should be noted that, in this document, relational terms such as "first" and "second" are only used to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises it.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the invention and are not intended to limit its protection scope. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention shall be included in the protection scope of the invention.

Claims (10)

1. An image processing method based on Compute Unified Device Architecture technology, characterized by comprising:
obtaining basic data, and determining a gridding result data scale according to an imaging resolution;
obtaining a K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale;
obtaining the trajectory value of each element in turn according to its coordinates in the K-space convolution window, and computing, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point;
if the Euclidean distance obtained is smaller than a first threshold, performing a convolution computation, in combination with the basic data, on the sampled data corresponding to the element value, to obtain gridding calculation result data; and
performing an inverse Fourier transform on the gridding calculation result data to obtain image data.
2. The method according to claim 1, characterized in that the basic data comprises: sampled data, density compensation data, sample trajectory data, and convolution kernel data.
3. The method according to claim 1, characterized in that the step of obtaining the K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale comprises: creating an execution kernel function according to the gridding result data scale, and performing the following steps with the execution kernel function:
obtaining the midpoint coordinates of the gridding result data according to the gridding result data scale;
computing the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
computing the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values; and
determining the coordinates of all elements in the window from the coordinates of the four vertices.
4. The method according to claim 3, characterized in that the step of computing the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates comprises:

$$x_{min} = \frac{grid.x - gridcenter.x - convwidth.x}{gridsize.x} \cdot kmax.x$$
$$x_{max} = \frac{grid.x - gridcenter.x + convwidth.x}{gridsize.x} \cdot kmax.x$$
$$y_{min} = \frac{grid.y - gridcenter.y - convwidth.y}{gridsize.y} \cdot kmax.y$$
$$y_{max} = \frac{grid.y - gridcenter.y + convwidth.y}{gridsize.y} \cdot kmax.y$$

where grid.x denotes the X coordinate of the current gridding point; grid.y denotes the Y coordinate of the current gridding point; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; gridcenter.x denotes the X coordinate of the gridding result center point; gridcenter.y denotes the Y coordinate of the gridding result center point; convwidth.x denotes the image-space convolution window size in the X direction; convwidth.y denotes the image-space convolution window size in the Y direction; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction.
5. The method according to claim 3, characterized in that the coordinates of the four vertices in the window are computed from the K-space convolution window vertex trajectory values as:

$$u_{min} = \left( x_{min} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX}$$
$$u_{max} = \left( x_{max} + \frac{corX}{2} \right) \cdot \frac{SampSize}{corX}$$
$$v_{min} = \left( y_{min} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2}$$
$$v_{max} = \left( y_{max} + \frac{corY}{2} \right) \cdot \frac{ScanMatrix}{corY} - \frac{ScanMatrix - ETL}{2}$$

where corX and corY denote the normalized lengths along the Cartesian X and Y axes respectively, SampSize denotes the number of sampling points, ScanMatrix denotes the sampling scan matrix size, ETL denotes the echo train length, 0 ≤ u < SampSize, and 0 ≤ v < ETL.
6. The method according to claim 1, characterized in that the step of computing, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point comprises:
i) computing the Euclidean distance dk between the trajectory value of one element and the trajectory value of the K-space center point, specifically:

$$dkx = kmax.x \cdot \frac{grid.x - gridcenter.x}{gridsize.x} - S(p,q).x$$
$$dky = kmax.y \cdot \frac{grid.y - gridcenter.y}{gridsize.y} - S(p,q).y$$
$$dk = \sqrt{dkx^2 + dky^2}$$

where (p, q) ∈ [(x_min, y_min), (x_max, y_max)], S(p,q) denotes the value of the sampling function S(u,v) at (p,q), and dk denotes the K-space Euclidean distance of the window element from the convolution window center point; grid.x denotes the X coordinate of the current gridding point; grid.y denotes the Y coordinate of the current gridding point; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; gridcenter.x denotes the X coordinate of the gridding result center point; gridcenter.y denotes the Y coordinate of the gridding result center point; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction;
ii) repeating step i) to compute, in turn, the Euclidean distance dk between the trajectory value of each element and the trajectory value of the K-space center point.
7. The method according to claim 1, characterized in that the first threshold d is:

$$d = \sqrt{\left( \frac{convwidth.x}{gridsize.x} \cdot kmax.x \right)^2 + \left( \frac{convwidth.y}{gridsize.y} \cdot kmax.y \right)^2}$$

where convwidth.x denotes the image-space convolution window size in the X direction; convwidth.y denotes the image-space convolution window size in the Y direction; gridsize.x denotes the gridding result size in the X direction; gridsize.y denotes the gridding result size in the Y direction; kmax.x denotes the K-space maximum in the X direction; and kmax.y denotes the K-space maximum in the Y direction.
8. An image processing device based on Compute Unified Device Architecture technology, characterized by comprising:
an acquisition module, configured to obtain basic data;
a data scale determination module, configured to determine a gridding result data scale according to an imaging resolution;
a coordinate calculation module, configured to obtain a K-space convolution window and the coordinates of all elements in the window according to the gridding result data scale;
a trajectory calculation module, configured to obtain, in turn, the trajectory value of each element according to its coordinates in the K-space convolution window;
a Euclidean distance calculation module, configured to compute, in turn, the Euclidean distance between the trajectory value of each element and the trajectory value of the K-space center point;
a gridding calculation module, configured to perform, when the Euclidean distance obtained is smaller than a first threshold, a convolution computation in combination with the basic data on the sampled data corresponding to the element value, obtaining gridding calculation result data; and
an image data calculation module, configured to perform an inverse Fourier transform on the gridding calculation result data to obtain image data.
9. The device according to claim 8, characterized in that the basic data comprises: sampled data, density compensation data, sample trajectory data, and convolution kernel data.
10. The device according to claim 8, characterized in that the coordinate calculation module comprises:
a center point coordinate calculation module, configured to obtain the midpoint coordinates of the gridding result data according to the gridding result data scale;
a vertex trajectory calculation module, configured to compute the trajectory values of the four vertices of the K-space convolution window from the midpoint coordinates;
a vertex coordinate calculation module, configured to compute the coordinates of the four vertices in the window from the K-space convolution window vertex trajectory values; and
a window element coordinate calculation module, configured to determine the coordinates of all elements in the window from the coordinates of the four vertices.
CN2009100132368A 2009-08-13 2009-08-13 Image processing method and device based on compute unified device architecture (CUDA) technology Expired - Fee Related CN101635046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100132368A CN101635046B (en) 2009-08-13 2009-08-13 Image processing method and device based on compute unified device architecture (CUDA) technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100132368A CN101635046B (en) 2009-08-13 2009-08-13 Image processing method and device based on compute unified device architecture (CUDA) technology

Publications (2)

Publication Number Publication Date
CN101635046A true CN101635046A (en) 2010-01-27
CN101635046B CN101635046B (en) 2012-06-27

Family

ID=41594228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100132368A Expired - Fee Related CN101635046B (en) 2009-08-13 2009-08-13 Image processing method and device based on compute unified device architecture (CUDA) technology

Country Status (1)

Country Link
CN (1) CN101635046B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102305918A (en) * 2011-05-24 2012-01-04 中国科学院武汉物理与数学研究所 Method for suppressing pseudo peak of nuclear magnetic resonance multi-dimensional spectrum
CN104408691A (en) * 2014-11-17 2015-03-11 南昌大学 GPU (Graphic Processing Unit)-based parallel selective masking smoothing method
CN110187962A (en) * 2019-04-26 2019-08-30 中国人民解放军战略支援部队信息工程大学 A kind of Gridding algorithm optimization method and device based on CUDA
CN113918356A (en) * 2021-12-13 2022-01-11 广东睿江云计算股份有限公司 Method and device for quickly synchronizing data based on CUDA (compute unified device architecture), computer equipment and storage medium
CN113918356B (en) * 2021-12-13 2022-02-18 广东睿江云计算股份有限公司 Method and device for quickly synchronizing data based on CUDA (compute unified device architecture), computer equipment and storage medium

Also Published As

Publication number Publication date
CN101635046B (en) 2012-06-27


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20120627
Termination date: 20200813